00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1066 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3728 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.020 The recommended git tool is: git 00:00:00.020 using credential 00000000-0000-0000-0000-000000000002 00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.036 Fetching changes from the remote Git repository 00:00:00.039 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.056 Using shallow fetch with depth 1 00:00:00.056 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.056 > git --version # timeout=10 00:00:00.073 > git --version # 'git version 2.39.2' 00:00:00.073 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.097 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.097 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.224 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.233 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.233 > git config core.sparsecheckout # timeout=10 00:00:02.242 > git read-tree -mu HEAD # timeout=10 00:00:02.255 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:02.271 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.271 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.606 [Pipeline] Start of Pipeline 00:00:02.617 [Pipeline] library 00:00:02.618 Loading library shm_lib@master 00:00:02.618 Library shm_lib@master is cached. Copying from home. 00:00:02.628 [Pipeline] node 00:00:02.637 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.639 [Pipeline] { 00:00:02.644 [Pipeline] catchError 00:00:02.645 [Pipeline] { 00:00:02.652 [Pipeline] wrap 00:00:02.657 [Pipeline] { 00:00:02.663 [Pipeline] stage 00:00:02.664 [Pipeline] { (Prologue) 00:00:02.675 [Pipeline] echo 00:00:02.676 Node: VM-host-WFP7 00:00:02.680 [Pipeline] cleanWs 00:00:02.689 [WS-CLEANUP] Deleting project workspace... 00:00:02.689 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.695 [WS-CLEANUP] done 00:00:02.891 [Pipeline] setCustomBuildProperty 00:00:02.981 [Pipeline] httpRequest 00:00:03.308 [Pipeline] echo 00:00:03.309 Sorcerer 10.211.164.20 is alive 00:00:03.316 [Pipeline] retry 00:00:03.317 [Pipeline] { 00:00:03.328 [Pipeline] httpRequest 00:00:03.333 HttpMethod: GET 00:00:03.334 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.334 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.335 Response Code: HTTP/1.1 200 OK 00:00:03.336 Success: Status code 200 is in the accepted range: 200,404 00:00:03.336 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.482 [Pipeline] } 00:00:03.494 [Pipeline] // retry 00:00:03.501 [Pipeline] sh 00:00:03.785 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.802 [Pipeline] httpRequest 00:00:04.107 [Pipeline] echo 00:00:04.109 Sorcerer 10.211.164.20 is alive 00:00:04.117 [Pipeline] retry 00:00:04.119 
[Pipeline] { 00:00:04.131 [Pipeline] httpRequest 00:00:04.135 HttpMethod: GET 00:00:04.135 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:04.136 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:04.137 Response Code: HTTP/1.1 200 OK 00:00:04.137 Success: Status code 200 is in the accepted range: 200,404 00:00:04.137 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:15.231 [Pipeline] } 00:00:15.248 [Pipeline] // retry 00:00:15.256 [Pipeline] sh 00:00:15.542 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:18.095 [Pipeline] sh 00:00:18.379 + git -C spdk log --oneline -n5 00:00:18.379 e01cb43b8 mk/spdk.common.mk sed the minor version 00:00:18.379 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:00:18.379 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:00:18.379 66289a6db build: use VERSION file for storing version 00:00:18.379 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:00:18.400 [Pipeline] withCredentials 00:00:18.412 > git --version # timeout=10 00:00:18.425 > git --version # 'git version 2.39.2' 00:00:18.443 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:18.446 [Pipeline] { 00:00:18.455 [Pipeline] retry 00:00:18.457 [Pipeline] { 00:00:18.474 [Pipeline] sh 00:00:18.758 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:19.031 [Pipeline] } 00:00:19.049 [Pipeline] // retry 00:00:19.054 [Pipeline] } 00:00:19.070 [Pipeline] // withCredentials 00:00:19.080 [Pipeline] httpRequest 00:00:19.471 [Pipeline] echo 00:00:19.473 Sorcerer 10.211.164.20 is alive 00:00:19.483 [Pipeline] retry 00:00:19.485 [Pipeline] { 00:00:19.498 [Pipeline] httpRequest 00:00:19.503 HttpMethod: GET 00:00:19.504 URL: 
http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:19.504 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:19.506 Response Code: HTTP/1.1 200 OK 00:00:19.506 Success: Status code 200 is in the accepted range: 200,404 00:00:19.507 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:29.389 [Pipeline] } 00:00:29.408 [Pipeline] // retry 00:00:29.428 [Pipeline] sh 00:00:29.721 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.112 [Pipeline] sh 00:00:31.395 + git -C dpdk log --oneline -n5 00:00:31.395 eeb0605f11 version: 23.11.0 00:00:31.395 238778122a doc: update release notes for 23.11 00:00:31.395 46aa6b3cfc doc: fix description of RSS features 00:00:31.395 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:31.395 7e421ae345 devtools: support skipping forbid rule check 00:00:31.413 [Pipeline] writeFile 00:00:31.427 [Pipeline] sh 00:00:31.711 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.722 [Pipeline] sh 00:00:32.003 + cat autorun-spdk.conf 00:00:32.003 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.003 SPDK_RUN_ASAN=1 00:00:32.003 SPDK_RUN_UBSAN=1 00:00:32.004 SPDK_TEST_RAID=1 00:00:32.004 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:32.004 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:32.004 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.010 RUN_NIGHTLY=1 00:00:32.012 [Pipeline] } 00:00:32.025 [Pipeline] // stage 00:00:32.038 [Pipeline] stage 00:00:32.040 [Pipeline] { (Run VM) 00:00:32.053 [Pipeline] sh 00:00:32.337 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.337 + echo 'Start stage prepare_nvme.sh' 00:00:32.337 Start stage prepare_nvme.sh 00:00:32.337 + [[ -n 1 ]] 00:00:32.337 + disk_prefix=ex1 00:00:32.337 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:32.337 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:32.337 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:32.337 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.337 ++ SPDK_RUN_ASAN=1 00:00:32.337 ++ SPDK_RUN_UBSAN=1 00:00:32.337 ++ SPDK_TEST_RAID=1 00:00:32.337 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:32.337 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:32.337 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.337 ++ RUN_NIGHTLY=1 00:00:32.337 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:32.337 + nvme_files=() 00:00:32.337 + declare -A nvme_files 00:00:32.337 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.337 + nvme_files['nvme.img']=5G 00:00:32.337 + nvme_files['nvme-cmb.img']=5G 00:00:32.337 + nvme_files['nvme-multi0.img']=4G 00:00:32.337 + nvme_files['nvme-multi1.img']=4G 00:00:32.337 + nvme_files['nvme-multi2.img']=4G 00:00:32.337 + nvme_files['nvme-openstack.img']=8G 00:00:32.337 + nvme_files['nvme-zns.img']=5G 00:00:32.337 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.337 + (( SPDK_TEST_FTL == 1 )) 00:00:32.337 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.337 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.337 + for nvme in "${!nvme_files[@]}" 00:00:32.337 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:32.337 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.597 + for nvme in "${!nvme_files[@]}" 00:00:32.597 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:32.597 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.597 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:32.597 + echo 'End stage prepare_nvme.sh' 00:00:32.597 End stage prepare_nvme.sh 00:00:32.608 [Pipeline] sh 00:00:32.890 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.890 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:32.890 00:00:32.890 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:32.890 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:32.890 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:32.890 HELP=0 00:00:32.890 DRY_RUN=0 00:00:32.890 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:32.890 NVME_DISKS_TYPE=nvme,nvme, 00:00:32.890 NVME_AUTO_CREATE=0 00:00:32.890 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:32.890 NVME_CMB=,, 00:00:32.890 NVME_PMR=,, 00:00:32.890 NVME_ZNS=,, 00:00:32.890 NVME_MS=,, 00:00:32.890 NVME_FDP=,, 00:00:32.890 SPDK_VAGRANT_DISTRO=fedora39 00:00:32.890 SPDK_VAGRANT_VMCPU=10 00:00:32.890 SPDK_VAGRANT_VMRAM=12288 00:00:32.890 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.890 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:32.890 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.890 SPDK_OPENSTACK_NETWORK=0 00:00:32.890 VAGRANT_PACKAGE_BOX=0 00:00:32.890 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.890 
FORCE_DISTRO=true 00:00:32.890 VAGRANT_BOX_VERSION= 00:00:32.890 EXTRA_VAGRANTFILES= 00:00:32.890 NIC_MODEL=virtio 00:00:32.890 00:00:32.890 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:32.890 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:34.819 Bringing machine 'default' up with 'libvirt' provider... 00:00:35.390 ==> default: Creating image (snapshot of base box volume). 00:00:35.390 ==> default: Creating domain with the following settings... 00:00:35.390 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734287495_83e792c62bdcae254e3c 00:00:35.390 ==> default: -- Domain type: kvm 00:00:35.390 ==> default: -- Cpus: 10 00:00:35.390 ==> default: -- Feature: acpi 00:00:35.391 ==> default: -- Feature: apic 00:00:35.391 ==> default: -- Feature: pae 00:00:35.391 ==> default: -- Memory: 12288M 00:00:35.391 ==> default: -- Memory Backing: hugepages: 00:00:35.391 ==> default: -- Management MAC: 00:00:35.391 ==> default: -- Loader: 00:00:35.391 ==> default: -- Nvram: 00:00:35.391 ==> default: -- Base box: spdk/fedora39 00:00:35.391 ==> default: -- Storage pool: default 00:00:35.391 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734287495_83e792c62bdcae254e3c.img (20G) 00:00:35.391 ==> default: -- Volume Cache: default 00:00:35.391 ==> default: -- Kernel: 00:00:35.391 ==> default: -- Initrd: 00:00:35.391 ==> default: -- Graphics Type: vnc 00:00:35.391 ==> default: -- Graphics Port: -1 00:00:35.391 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.391 ==> default: -- Graphics Password: Not defined 00:00:35.391 ==> default: -- Video Type: cirrus 00:00:35.391 ==> default: -- Video VRAM: 9216 00:00:35.391 ==> default: -- Sound Type: 00:00:35.391 ==> default: -- Keymap: en-us 00:00:35.391 ==> default: -- TPM Path: 00:00:35.391 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.391 ==> default: -- Command line args: 00:00:35.391 
==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:35.391 ==> default: -> value=-drive, 00:00:35.391 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:35.391 ==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.391 ==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:35.391 ==> default: -> value=-drive, 00:00:35.391 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:35.391 ==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.391 ==> default: -> value=-drive, 00:00:35.391 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:35.391 ==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.391 ==> default: -> value=-drive, 00:00:35.391 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:35.391 ==> default: -> value=-device, 00:00:35.391 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.650 ==> default: Creating shared folders metadata... 00:00:35.650 ==> default: Starting domain. 00:00:37.031 ==> default: Waiting for domain to get an IP address... 00:00:55.127 ==> default: Waiting for SSH to become available... 00:00:55.127 ==> default: Configuring and enabling network interfaces... 
00:01:00.407 default: SSH address: 192.168.121.208:22 00:01:00.407 default: SSH username: vagrant 00:01:00.407 default: SSH auth method: private key 00:01:03.705 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.283 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:16.861 ==> default: Mounting SSHFS shared folder... 00:01:19.485 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:19.485 ==> default: Checking Mount.. 00:01:20.871 ==> default: Folder Successfully Mounted! 00:01:20.871 ==> default: Running provisioner: file... 00:01:22.253 default: ~/.gitconfig => .gitconfig 00:01:22.823 00:01:22.823 SUCCESS! 00:01:22.823 00:01:22.823 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:22.823 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:22.823 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:22.823 00:01:22.833 [Pipeline] } 00:01:22.846 [Pipeline] // stage 00:01:22.853 [Pipeline] dir 00:01:22.854 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:22.855 [Pipeline] { 00:01:22.866 [Pipeline] catchError 00:01:22.867 [Pipeline] { 00:01:22.879 [Pipeline] sh 00:01:23.162 + vagrant ssh-config --host vagrant 00:01:23.162 + sed -ne /^Host/,$p 00:01:23.162 + tee ssh_conf 00:01:25.702 Host vagrant 00:01:25.702 HostName 192.168.121.208 00:01:25.702 User vagrant 00:01:25.702 Port 22 00:01:25.702 UserKnownHostsFile /dev/null 00:01:25.702 StrictHostKeyChecking no 00:01:25.702 PasswordAuthentication no 00:01:25.702 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:25.702 IdentitiesOnly yes 00:01:25.702 LogLevel FATAL 00:01:25.702 ForwardAgent yes 00:01:25.702 ForwardX11 yes 00:01:25.702 00:01:25.717 [Pipeline] withEnv 00:01:25.720 [Pipeline] { 00:01:25.733 [Pipeline] sh 00:01:26.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:26.017 source /etc/os-release 00:01:26.017 [[ -e /image.version ]] && img=$(< /image.version) 00:01:26.017 # Minimal, systemd-like check. 00:01:26.017 if [[ -e /.dockerenv ]]; then 00:01:26.017 # Clear garbage from the node's name: 00:01:26.017 # agt-er_autotest_547-896 -> autotest_547-896 00:01:26.017 # $HOSTNAME is the actual container id 00:01:26.017 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:26.017 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:26.017 # We can assume this is a mount from a host where container is running, 00:01:26.017 # so fetch its hostname to easily identify the target swarm worker. 
00:01:26.017 container="$(< /etc/hostname) ($agent)" 00:01:26.017 else 00:01:26.017 # Fallback 00:01:26.017 container=$agent 00:01:26.017 fi 00:01:26.017 fi 00:01:26.018 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:26.018 00:01:26.291 [Pipeline] } 00:01:26.307 [Pipeline] // withEnv 00:01:26.316 [Pipeline] setCustomBuildProperty 00:01:26.332 [Pipeline] stage 00:01:26.334 [Pipeline] { (Tests) 00:01:26.350 [Pipeline] sh 00:01:26.634 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:26.908 [Pipeline] sh 00:01:27.192 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:27.468 [Pipeline] timeout 00:01:27.468 Timeout set to expire in 1 hr 30 min 00:01:27.470 [Pipeline] { 00:01:27.483 [Pipeline] sh 00:01:27.768 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:28.338 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:28.350 [Pipeline] sh 00:01:28.634 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:28.908 [Pipeline] sh 00:01:29.192 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:29.467 [Pipeline] sh 00:01:29.750 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:30.010 ++ readlink -f spdk_repo 00:01:30.010 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:30.010 + [[ -n /home/vagrant/spdk_repo ]] 00:01:30.010 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:30.010 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:30.010 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:30.010 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:30.010 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:30.010 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:30.010 + cd /home/vagrant/spdk_repo 00:01:30.010 + source /etc/os-release 00:01:30.010 ++ NAME='Fedora Linux' 00:01:30.010 ++ VERSION='39 (Cloud Edition)' 00:01:30.010 ++ ID=fedora 00:01:30.010 ++ VERSION_ID=39 00:01:30.010 ++ VERSION_CODENAME= 00:01:30.010 ++ PLATFORM_ID=platform:f39 00:01:30.010 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.010 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.010 ++ LOGO=fedora-logo-icon 00:01:30.010 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.010 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.010 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.010 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.010 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.010 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.010 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.010 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.010 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.010 ++ SUPPORT_END=2024-11-12 00:01:30.010 ++ VARIANT='Cloud Edition' 00:01:30.010 ++ VARIANT_ID=cloud 00:01:30.010 + uname -a 00:01:30.010 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:30.010 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:30.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:30.581 Hugepages 00:01:30.581 node hugesize free / total 00:01:30.581 node0 1048576kB 0 / 0 00:01:30.581 node0 2048kB 0 / 0 00:01:30.581 00:01:30.581 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.581 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:30.581 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:30.581 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:30.581 + rm -f /tmp/spdk-ld-path 00:01:30.581 + source autorun-spdk.conf 00:01:30.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.581 ++ SPDK_RUN_ASAN=1 00:01:30.581 ++ SPDK_RUN_UBSAN=1 00:01:30.581 ++ SPDK_TEST_RAID=1 00:01:30.581 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:30.581 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:30.581 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.581 ++ RUN_NIGHTLY=1 00:01:30.581 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.581 + [[ -n '' ]] 00:01:30.581 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:30.841 + for M in /var/spdk/build-*-manifest.txt 00:01:30.841 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:30.841 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.841 + for M in /var/spdk/build-*-manifest.txt 00:01:30.841 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.841 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.841 + for M in /var/spdk/build-*-manifest.txt 00:01:30.841 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.841 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.841 ++ uname 00:01:30.841 + [[ Linux == \L\i\n\u\x ]] 00:01:30.841 + sudo dmesg -T 00:01:30.841 + sudo dmesg --clear 00:01:30.841 + dmesg_pid=6154 00:01:30.841 + [[ Fedora Linux == FreeBSD ]] 00:01:30.841 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.841 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.841 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.841 + sudo dmesg -Tw 00:01:30.841 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.841 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.841 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.841 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.841 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:30.841 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.841 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.841 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.841 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.841 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.841 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.841 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.101 18:32:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.101 18:32:31 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.101 18:32:31 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:01:31.101 18:32:31 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:31.101 18:32:31 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.101 18:32:31 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:31.101 18:32:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:31.101 18:32:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:31.101 18:32:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.101 18:32:31 
-- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.101 18:32:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.101 18:32:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.101 18:32:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.101 18:32:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.101 18:32:31 -- paths/export.sh@5 -- $ export PATH 00:01:31.101 18:32:31 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.101 18:32:31 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:31.101 18:32:31 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:31.101 18:32:31 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734287551.XXXXXX 00:01:31.101 18:32:31 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734287551.1LenE3 00:01:31.101 18:32:31 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:31.101 18:32:31 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:01:31.101 18:32:31 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:31.101 18:32:31 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:31.101 18:32:31 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:31.101 18:32:31 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.101 18:32:31 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:31.101 18:32:31 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:31.101 18:32:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.101 18:32:31 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:31.101 18:32:31 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:31.101 18:32:31 -- pm/common@17 -- $ local monitor 00:01:31.101 18:32:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.101 18:32:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.101 18:32:31 -- pm/common@25 -- $ sleep 1 00:01:31.101 18:32:31 -- pm/common@21 -- $ date +%s 00:01:31.101 18:32:31 -- pm/common@21 -- $ date +%s 00:01:31.101 18:32:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734287551 00:01:31.101 18:32:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734287551 00:01:31.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734287551_collect-vmstat.pm.log 00:01:31.101 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734287551_collect-cpu-load.pm.log 00:01:32.040 18:32:32 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:32.040 18:32:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.040 18:32:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.040 18:32:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:32.040 18:32:32 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.040 Sun Dec 15 06:32:32 PM UTC 2024 00:01:32.040 18:32:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.040 v25.01-rc1-2-ge01cb43b8 00:01:32.040 18:32:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:32.040 18:32:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:32.040 18:32:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 
00:01:32.040 18:32:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:32.040 18:32:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.309 ************************************
00:01:32.309 START TEST asan
00:01:32.309 ************************************
00:01:32.309 using asan
00:01:32.309 18:32:32 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:32.309
00:01:32.309 real 0m0.001s
00:01:32.309 user 0m0.000s
00:01:32.309 sys 0m0.000s
00:01:32.309 18:32:32 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:32.309 18:32:32 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:32.309 ************************************
00:01:32.309 END TEST asan
00:01:32.309 ************************************
00:01:32.309 18:32:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:32.309 18:32:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:32.309 18:32:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:32.309 18:32:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:32.309 18:32:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.309 ************************************
00:01:32.309 START TEST ubsan
00:01:32.309 ************************************
00:01:32.309 using ubsan
00:01:32.309 18:32:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:32.309
00:01:32.309 real 0m0.001s
00:01:32.309 user 0m0.000s
00:01:32.309 sys 0m0.000s
00:01:32.309 18:32:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:32.309 18:32:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:32.309 ************************************
00:01:32.309 END TEST ubsan
00:01:32.309 ************************************
00:01:32.309 18:32:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:32.309 18:32:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:32.309 18:32:32 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:32.309 18:32:32 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:01:32.309 18:32:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:32.309 18:32:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:32.309 ************************************
00:01:32.309 START TEST build_native_dpdk
00:01:32.309 ************************************
00:01:32.309 18:32:32 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:32.309 eeb0605f11 version: 23.11.0
00:01:32.309 238778122a doc: update release notes for 23.11
00:01:32.309 46aa6b3cfc doc: fix description of RSS features
00:01:32.309 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:32.309 7e421ae345 devtools: support skipping forbid rule check
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:01:32.309 18:32:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:01:32.309 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:01:32.310 18:32:32 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:01:32.310 patching file config/rte_config.h
00:01:32.310 Hunk #1 succeeded at 60 (offset 1 line).
00:01:32.310 18:32:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:01:32.310 18:32:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:01:32.310 patching file lib/pcapng/rte_pcapng.c
00:01:32.310 18:32:32 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:32.310 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:01:32.583 18:32:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:01:32.583 18:32:32 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:01:32.583 18:32:32 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:01:32.583 18:32:32 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:01:32.583 18:32:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:01:32.583 18:32:32 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:39.159 The Meson build system
00:01:39.159 Version: 1.5.0
00:01:39.159 Source dir: /home/vagrant/spdk_repo/dpdk
00:01:39.159 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:39.159 Build type: native build
00:01:39.159 Program cat found: YES (/usr/bin/cat)
00:01:39.159 Project name: DPDK
00:01:39.159 Project version: 23.11.0
00:01:39.159 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:39.159 C linker for the host machine: gcc ld.bfd 2.40-14
00:01:39.159 Host machine cpu family: x86_64
00:01:39.159 Host machine cpu: x86_64
00:01:39.159 Message: ## Building in Developer Mode ##
00:01:39.159 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:39.159 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:39.159 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:39.159 Program python3 found: YES (/usr/bin/python3)
00:01:39.159 Program cat found: YES (/usr/bin/cat)
00:01:39.159 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:39.159 Compiler for C supports arguments -march=native: YES
00:01:39.159 Checking for size of "void *" : 8
00:01:39.159 Checking for size of "void *" : 8 (cached)
00:01:39.159 Library m found: YES
00:01:39.159 Library numa found: YES
00:01:39.159 Has header "numaif.h" : YES
00:01:39.159 Library fdt found: NO
00:01:39.159 Library execinfo found: NO
00:01:39.159 Has header "execinfo.h" : YES
00:01:39.159 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:39.159 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:39.159 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:39.159 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:39.159 Run-time dependency openssl found: YES 3.1.1
00:01:39.159 Run-time dependency libpcap found: YES 1.10.4
00:01:39.159 Has header "pcap.h" with dependency libpcap: YES
00:01:39.159 Compiler for C supports arguments -Wcast-qual: YES
00:01:39.159 Compiler for C supports arguments -Wdeprecated: YES
00:01:39.159 Compiler for C supports arguments -Wformat: YES
00:01:39.159 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:39.159 Compiler for C supports arguments -Wformat-security: NO
00:01:39.159 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:39.159 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:39.159 Compiler for C supports arguments -Wnested-externs: YES
00:01:39.159 Compiler for C supports arguments -Wold-style-definition: YES
00:01:39.159 Compiler for C supports arguments -Wpointer-arith: YES
00:01:39.159 Compiler for C supports arguments -Wsign-compare: YES
00:01:39.159 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:39.159 Compiler for C supports arguments -Wundef: YES
00:01:39.159 Compiler for C supports arguments -Wwrite-strings: YES
00:01:39.159 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:39.159 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:39.159 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:39.159 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:39.159 Program objdump found: YES (/usr/bin/objdump)
00:01:39.159 Compiler for C supports arguments -mavx512f: YES
00:01:39.159 Checking if "AVX512 checking" compiles: YES
00:01:39.159 Fetching value of define "__SSE4_2__" : 1
00:01:39.159 Fetching value of define "__AES__" : 1
00:01:39.159 Fetching value of define "__AVX__" : 1
00:01:39.159 Fetching value of define "__AVX2__" : 1
00:01:39.159 Fetching value of define "__AVX512BW__" : 1
00:01:39.159 Fetching value of define "__AVX512CD__" : 1
00:01:39.159 Fetching value of define "__AVX512DQ__" : 1
00:01:39.159 Fetching value of define "__AVX512F__" : 1
00:01:39.159 Fetching value of define "__AVX512VL__" : 1
00:01:39.159 Fetching value of define "__PCLMUL__" : 1
00:01:39.159 Fetching value of define "__RDRND__" : 1
00:01:39.159 Fetching value of define "__RDSEED__" : 1
00:01:39.159 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:39.159 Fetching value of define "__znver1__" : (undefined)
00:01:39.159 Fetching value of define "__znver2__" : (undefined)
00:01:39.159 Fetching value of define "__znver3__" : (undefined)
00:01:39.159 Fetching value of define "__znver4__" : (undefined)
00:01:39.159 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:39.159 Message: lib/log: Defining dependency "log"
00:01:39.159 Message: lib/kvargs: Defining dependency "kvargs"
00:01:39.159 Message: lib/telemetry: Defining dependency "telemetry"
00:01:39.159 Checking for function "getentropy" : NO
00:01:39.159 Message: lib/eal: Defining dependency "eal"
00:01:39.159 Message: lib/ring: Defining dependency "ring"
00:01:39.159 Message: lib/rcu: Defining dependency "rcu"
00:01:39.159 Message: lib/mempool: Defining dependency "mempool"
00:01:39.159 Message: lib/mbuf: Defining dependency "mbuf"
00:01:39.159 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:39.159 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:39.159 Compiler for C supports arguments -mpclmul: YES
00:01:39.159 Compiler for C supports arguments -maes: YES
00:01:39.159 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:39.159 Compiler for C supports arguments -mavx512bw: YES
00:01:39.159 Compiler for C supports arguments -mavx512dq: YES
00:01:39.159 Compiler for C supports arguments -mavx512vl: YES
00:01:39.159 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:39.159 Compiler for C supports arguments -mavx2: YES
00:01:39.159 Compiler for C supports arguments -mavx: YES
00:01:39.159 Message: lib/net: Defining dependency "net"
00:01:39.159 Message: lib/meter: Defining dependency "meter"
00:01:39.159 Message: lib/ethdev: Defining dependency "ethdev"
00:01:39.159 Message: lib/pci: Defining dependency "pci"
00:01:39.159 Message: lib/cmdline: Defining dependency "cmdline"
00:01:39.159 Message: lib/metrics: Defining dependency "metrics"
00:01:39.159 Message: lib/hash: Defining dependency "hash"
00:01:39.159 Message: lib/timer: Defining dependency "timer"
00:01:39.159 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512CD__" : 1 (cached)
00:01:39.159 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:39.159 Message: lib/acl: Defining dependency "acl"
00:01:39.160 Message: lib/bbdev: Defining dependency "bbdev"
00:01:39.160 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:39.160 Run-time dependency libelf found: YES 0.191
00:01:39.160 Message: lib/bpf: Defining dependency "bpf"
00:01:39.160 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:39.160 Message: lib/compressdev: Defining dependency "compressdev"
00:01:39.160 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:39.160 Message: lib/distributor: Defining dependency "distributor"
00:01:39.160 Message: lib/dmadev: Defining dependency "dmadev"
00:01:39.160 Message: lib/efd: Defining dependency "efd"
00:01:39.160 Message: lib/eventdev: Defining dependency "eventdev"
00:01:39.160 Message: lib/dispatcher: Defining dependency "dispatcher"
00:01:39.160 Message: lib/gpudev: Defining dependency "gpudev"
00:01:39.160 Message: lib/gro: Defining dependency "gro"
00:01:39.160 Message: lib/gso: Defining dependency "gso"
00:01:39.160 Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:39.160 Message: lib/jobstats: Defining dependency "jobstats"
00:01:39.160 Message: lib/latencystats: Defining dependency "latencystats"
00:01:39.160 Message: lib/lpm: Defining dependency "lpm"
00:01:39.160 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.160 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:39.160 Fetching value of define "__AVX512IFMA__" : (undefined)
00:01:39.160 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:01:39.160 Message: lib/member: Defining dependency "member"
00:01:39.160 Message: lib/pcapng: Defining dependency "pcapng"
00:01:39.160 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:39.160 Message: lib/power: Defining dependency "power"
00:01:39.160 Message: lib/rawdev: Defining dependency "rawdev"
00:01:39.160 Message: lib/regexdev: Defining dependency "regexdev"
00:01:39.160 Message: lib/mldev: Defining dependency "mldev"
00:01:39.160 Message: lib/rib: Defining dependency "rib"
00:01:39.160 Message: lib/reorder: Defining dependency "reorder"
00:01:39.160 Message: lib/sched: Defining dependency "sched"
00:01:39.160 Message: lib/security: Defining dependency "security"
00:01:39.160 Message: lib/stack: Defining dependency "stack"
00:01:39.160 Has header "linux/userfaultfd.h" : YES
00:01:39.160 Has header "linux/vduse.h" : YES
00:01:39.160 Message: lib/vhost: Defining dependency "vhost"
00:01:39.160 Message: lib/ipsec: Defining dependency "ipsec"
00:01:39.160 Message: lib/pdcp: Defining dependency "pdcp"
00:01:39.160 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:39.160 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:39.160 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:39.160 Message: lib/fib: Defining dependency "fib"
00:01:39.160 Message: lib/port: Defining dependency "port"
00:01:39.160 Message: lib/pdump: Defining dependency "pdump"
00:01:39.160 Message: lib/table: Defining dependency "table"
00:01:39.160 Message: lib/pipeline: Defining dependency "pipeline"
00:01:39.160 Message: lib/graph: Defining dependency "graph"
00:01:39.160 Message: lib/node: Defining dependency "node"
00:01:39.160 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:39.160 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:39.160 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:40.118 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:40.118 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:40.118 Compiler for C supports arguments -Wno-unused-value: YES
00:01:40.118 Compiler for C supports arguments -Wno-format: YES
00:01:40.118 Compiler for C supports arguments -Wno-format-security: YES
00:01:40.118 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:40.118 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:40.118 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:40.118 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:40.118 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:40.118 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:40.118 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:40.118 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:40.118 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:40.118 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:40.118 Has header "sys/epoll.h" : YES
00:01:40.118 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:40.118 Configuring doxy-api-html.conf using configuration
00:01:40.118 Configuring doxy-api-man.conf using configuration
00:01:40.118 Program mandb found: YES (/usr/bin/mandb)
00:01:40.118 Program sphinx-build found: NO
00:01:40.118 Configuring rte_build_config.h using configuration
00:01:40.118 Message:
00:01:40.118 =================
00:01:40.118 Applications Enabled
00:01:40.118 =================
00:01:40.118
00:01:40.118 apps:
00:01:40.118 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:01:40.118 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:01:40.118 test-pmd, test-regex, test-sad, test-security-perf,
00:01:40.118
00:01:40.118 Message:
00:01:40.118 =================
00:01:40.118 Libraries Enabled
00:01:40.118 =================
00:01:40.118
00:01:40.118 libs:
00:01:40.118 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:40.118 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:01:40.118 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:01:40.118 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:01:40.118 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:01:40.118 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:01:40.118 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:01:40.118
00:01:40.118
00:01:40.118 Message:
00:01:40.118 ===============
00:01:40.118 Drivers Enabled
00:01:40.118 ===============
00:01:40.118
00:01:40.118 common:
00:01:40.118
00:01:40.118 bus:
00:01:40.118 pci, vdev,
00:01:40.118 mempool:
00:01:40.118 ring,
00:01:40.118 dma:
00:01:40.118
00:01:40.118 net:
00:01:40.118 i40e,
00:01:40.118 raw:
00:01:40.118
00:01:40.118 crypto:
00:01:40.118
00:01:40.118 compress:
00:01:40.118
00:01:40.118 regex:
00:01:40.118
00:01:40.118 ml:
00:01:40.118
00:01:40.118 vdpa:
00:01:40.118
00:01:40.118 event:
00:01:40.118
00:01:40.118 baseband:
00:01:40.118
00:01:40.118 gpu:
00:01:40.118
00:01:40.118
00:01:40.118 Message:
00:01:40.118 =================
00:01:40.118 Content Skipped
00:01:40.118 =================
00:01:40.118
00:01:40.118 apps:
00:01:40.118
00:01:40.118 libs:
00:01:40.118
00:01:40.118 drivers:
00:01:40.118 common/cpt: not in enabled drivers build config
00:01:40.118 common/dpaax: not in enabled drivers build config
00:01:40.118 common/iavf: not in enabled drivers build config
00:01:40.118 common/idpf: not in enabled drivers build config
00:01:40.118 common/mvep: not in enabled drivers build config
00:01:40.118 common/octeontx: not in enabled drivers build config
00:01:40.118 bus/auxiliary: not in enabled drivers build config
00:01:40.118 bus/cdx: not in enabled drivers build config
00:01:40.118 bus/dpaa: not in enabled drivers build config
00:01:40.118 bus/fslmc: not in enabled drivers build config
00:01:40.118 bus/ifpga: not in enabled drivers build config
00:01:40.118 bus/platform: not in enabled drivers build config
00:01:40.118 bus/vmbus: not in enabled drivers build config
00:01:40.118 common/cnxk: not in enabled drivers build config
00:01:40.118 common/mlx5: not in enabled drivers build config
00:01:40.118 common/nfp: not in enabled drivers build config
00:01:40.118 common/qat: not in enabled drivers build config
00:01:40.118 common/sfc_efx: not in enabled drivers build config
00:01:40.118 mempool/bucket: not in enabled drivers build config
00:01:40.118 mempool/cnxk: not in enabled drivers build config
00:01:40.118 mempool/dpaa: not in enabled drivers build config
00:01:40.118 mempool/dpaa2: not in enabled drivers build config
00:01:40.118 mempool/octeontx: not in enabled drivers build config
00:01:40.118 mempool/stack: not in enabled drivers build config
00:01:40.118 dma/cnxk: not in enabled drivers build config
00:01:40.118 dma/dpaa: not in enabled drivers build config
00:01:40.118 dma/dpaa2: not in enabled drivers build config
00:01:40.118 dma/hisilicon: not in enabled drivers build config
00:01:40.118 dma/idxd: not in enabled drivers build config
00:01:40.118 dma/ioat: not in enabled drivers build config
00:01:40.118 dma/skeleton: not in enabled drivers build config
00:01:40.118 net/af_packet: not in enabled drivers build config
00:01:40.118 net/af_xdp: not in enabled drivers build config
00:01:40.118 net/ark: not in enabled drivers build config
00:01:40.118 net/atlantic: not in enabled drivers build config
00:01:40.118 net/avp: not in enabled drivers build config
00:01:40.118 net/axgbe: not in enabled drivers build config
00:01:40.118 net/bnx2x: not in enabled drivers build config
00:01:40.118 net/bnxt: not in enabled drivers build config
00:01:40.118 net/bonding: not in enabled drivers build config
00:01:40.118 net/cnxk: not in enabled drivers build config
00:01:40.118 net/cpfl: not in enabled drivers build config
00:01:40.118 net/cxgbe: not in enabled drivers build config
00:01:40.118 net/dpaa: not in enabled drivers build config
00:01:40.118 net/dpaa2: not in enabled drivers build config
00:01:40.118 net/e1000: not in enabled drivers build config
00:01:40.118 net/ena: not in enabled drivers build config
00:01:40.118 net/enetc: not in enabled drivers build config
00:01:40.118 net/enetfec: not in enabled drivers build config
00:01:40.118 net/enic: not in enabled drivers build config
00:01:40.118 net/failsafe: not in enabled drivers build config
00:01:40.118 net/fm10k: not in enabled drivers build config
00:01:40.118 net/gve: not in enabled drivers build config
00:01:40.118 net/hinic: not in enabled drivers build config
00:01:40.118 net/hns3: not in enabled drivers build config
00:01:40.118 net/iavf: not in enabled drivers build config
00:01:40.118 net/ice: not in enabled drivers build config 00:01:40.118 net/idpf: not in enabled drivers build config 00:01:40.118 net/igc: not in enabled drivers build config 00:01:40.118 net/ionic: not in enabled drivers build config 00:01:40.118 net/ipn3ke: not in enabled drivers build config 00:01:40.118 net/ixgbe: not in enabled drivers build config 00:01:40.118 net/mana: not in enabled drivers build config 00:01:40.118 net/memif: not in enabled drivers build config 00:01:40.118 net/mlx4: not in enabled drivers build config 00:01:40.118 net/mlx5: not in enabled drivers build config 00:01:40.118 net/mvneta: not in enabled drivers build config 00:01:40.118 net/mvpp2: not in enabled drivers build config 00:01:40.118 net/netvsc: not in enabled drivers build config 00:01:40.118 net/nfb: not in enabled drivers build config 00:01:40.118 net/nfp: not in enabled drivers build config 00:01:40.118 net/ngbe: not in enabled drivers build config 00:01:40.118 net/null: not in enabled drivers build config 00:01:40.118 net/octeontx: not in enabled drivers build config 00:01:40.118 net/octeon_ep: not in enabled drivers build config 00:01:40.118 net/pcap: not in enabled drivers build config 00:01:40.118 net/pfe: not in enabled drivers build config 00:01:40.118 net/qede: not in enabled drivers build config 00:01:40.118 net/ring: not in enabled drivers build config 00:01:40.118 net/sfc: not in enabled drivers build config 00:01:40.118 net/softnic: not in enabled drivers build config 00:01:40.118 net/tap: not in enabled drivers build config 00:01:40.118 net/thunderx: not in enabled drivers build config 00:01:40.118 net/txgbe: not in enabled drivers build config 00:01:40.118 net/vdev_netvsc: not in enabled drivers build config 00:01:40.118 net/vhost: not in enabled drivers build config 00:01:40.118 net/virtio: not in enabled drivers build config 00:01:40.118 net/vmxnet3: not in enabled drivers build config 00:01:40.118 raw/cnxk_bphy: not in enabled drivers build config 00:01:40.118 
raw/cnxk_gpio: not in enabled drivers build config 00:01:40.118 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:40.118 raw/ifpga: not in enabled drivers build config 00:01:40.118 raw/ntb: not in enabled drivers build config 00:01:40.119 raw/skeleton: not in enabled drivers build config 00:01:40.119 crypto/armv8: not in enabled drivers build config 00:01:40.119 crypto/bcmfs: not in enabled drivers build config 00:01:40.119 crypto/caam_jr: not in enabled drivers build config 00:01:40.119 crypto/ccp: not in enabled drivers build config 00:01:40.119 crypto/cnxk: not in enabled drivers build config 00:01:40.119 crypto/dpaa_sec: not in enabled drivers build config 00:01:40.119 crypto/dpaa2_sec: not in enabled drivers build config 00:01:40.119 crypto/ipsec_mb: not in enabled drivers build config 00:01:40.119 crypto/mlx5: not in enabled drivers build config 00:01:40.119 crypto/mvsam: not in enabled drivers build config 00:01:40.119 crypto/nitrox: not in enabled drivers build config 00:01:40.119 crypto/null: not in enabled drivers build config 00:01:40.119 crypto/octeontx: not in enabled drivers build config 00:01:40.119 crypto/openssl: not in enabled drivers build config 00:01:40.119 crypto/scheduler: not in enabled drivers build config 00:01:40.119 crypto/uadk: not in enabled drivers build config 00:01:40.119 crypto/virtio: not in enabled drivers build config 00:01:40.119 compress/isal: not in enabled drivers build config 00:01:40.119 compress/mlx5: not in enabled drivers build config 00:01:40.119 compress/octeontx: not in enabled drivers build config 00:01:40.119 compress/zlib: not in enabled drivers build config 00:01:40.119 regex/mlx5: not in enabled drivers build config 00:01:40.119 regex/cn9k: not in enabled drivers build config 00:01:40.119 ml/cnxk: not in enabled drivers build config 00:01:40.119 vdpa/ifc: not in enabled drivers build config 00:01:40.119 vdpa/mlx5: not in enabled drivers build config 00:01:40.119 vdpa/nfp: not in enabled drivers build 
config 00:01:40.119 vdpa/sfc: not in enabled drivers build config 00:01:40.119 event/cnxk: not in enabled drivers build config 00:01:40.119 event/dlb2: not in enabled drivers build config 00:01:40.119 event/dpaa: not in enabled drivers build config 00:01:40.119 event/dpaa2: not in enabled drivers build config 00:01:40.119 event/dsw: not in enabled drivers build config 00:01:40.119 event/opdl: not in enabled drivers build config 00:01:40.119 event/skeleton: not in enabled drivers build config 00:01:40.119 event/sw: not in enabled drivers build config 00:01:40.119 event/octeontx: not in enabled drivers build config 00:01:40.119 baseband/acc: not in enabled drivers build config 00:01:40.119 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:40.119 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:40.119 baseband/la12xx: not in enabled drivers build config 00:01:40.119 baseband/null: not in enabled drivers build config 00:01:40.119 baseband/turbo_sw: not in enabled drivers build config 00:01:40.119 gpu/cuda: not in enabled drivers build config 00:01:40.119 00:01:40.119 00:01:40.119 Build targets in project: 217 00:01:40.119 00:01:40.119 DPDK 23.11.0 00:01:40.119 00:01:40.119 User defined options 00:01:40.119 libdir : lib 00:01:40.119 prefix : /home/vagrant/spdk_repo/dpdk/build 00:01:40.119 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:40.119 c_link_args : 00:01:40.119 enable_docs : false 00:01:40.119 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:01:40.119 enable_kmods : false 00:01:40.119 machine : native 00:01:40.119 tests : false 00:01:40.119 00:01:40.119 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.119 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
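The WARNING above is emitted by newer Meson releases: invoking `meson [options] builddir` without an explicit subcommand is ambiguous and deprecated in favour of `meson setup`. A minimal sketch of the unambiguous form, reusing only the paths and options printed in the "User defined options" block above (the exact invocation used by autobuild_common.sh is not shown in this log, so this is illustrative, not the script's actual command):

```shell
# Illustrative only: explicit "meson setup" equivalent of the configure step,
# with the prefix/libdir and DPDK options taken from the log's summary above.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false \
  -Dmachine=native
ninja -C build-tmp -j10
```

With the `setup` subcommand spelled out, the deprecation warning is not printed; the rest of the configure output is unchanged.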
00:01:40.119 18:32:40 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:40.119 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:40.119 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:40.119 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:40.119 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:40.119 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:40.379 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:40.379 [6/707] Linking static target lib/librte_kvargs.a
00:01:40.379 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:40.379 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:40.379 [9/707] Linking static target lib/librte_log.a
00:01:40.379 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:40.379 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.379 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:40.379 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:40.638 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:40.638 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:40.638 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:40.638 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.638 [18/707] Linking target lib/librte_log.so.24.0
00:01:40.638 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:40.897 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:40.897 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:40.897 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:40.897 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:40.897 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:40.897 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:41.156 [26/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:41.156 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:41.156 [28/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:41.156 [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:41.156 [30/707] Linking static target lib/librte_telemetry.a
00:01:41.156 [31/707] Linking target lib/librte_kvargs.so.24.0
00:01:41.156 [32/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:41.156 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:41.156 [34/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:41.156 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:41.414 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:41.414 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:41.414 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:41.414 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:41.414 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:41.414 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:41.414 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.414 [43/707] Linking target lib/librte_telemetry.so.24.0
00:01:41.414 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:41.674 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:41.674 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:41.674 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:41.674 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:41.674 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:41.674 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:41.674 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:41.674 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.942 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:41.942 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:41.942 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:41.942 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:41.942 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:41.942 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:41.942 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:41.942 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:41.942 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:41.942 [62/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:42.202 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:42.202 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:42.202 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:42.202 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:42.202 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:42.202 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:42.462 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:42.462 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:42.462 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:42.462 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:42.462 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:42.462 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:42.462 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:42.462 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:42.462 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:42.722 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:42.722 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:42.722 [80/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:42.722 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:42.722 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:42.722 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:42.981 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:42.981 [85/707] Linking static target lib/librte_ring.a
00:01:42.981 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:42.981 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:42.981 [88/707] Linking static target lib/librte_eal.a
00:01:42.981 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.981 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:42.981 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:42.982 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:43.241 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:43.241 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:43.241 [95/707] Linking static target lib/librte_mempool.a
00:01:43.501 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:43.501 [97/707] Linking static target lib/librte_rcu.a
00:01:43.501 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:43.501 [99/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:43.501 [100/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:43.501 [101/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:43.501 [102/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:43.501 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:43.760 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:43.760 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.760 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:43.760 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:43.760 [108/707] Linking static target lib/librte_net.a
00:01:43.760 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:43.760 [110/707] Linking static target lib/librte_meter.a
00:01:44.019 [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:44.019 [112/707] Linking static target lib/librte_mbuf.a
00:01:44.019 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:44.019 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:44.019 [115/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.019 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:44.019 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:44.019 [118/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.279 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.539 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:44.539 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:44.800 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:44.800 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:44.800 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:44.800 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:44.800 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:44.800 [127/707] Linking static target lib/librte_pci.a
00:01:44.800 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:44.800 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:44.800 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:45.058 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:45.058 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:45.058 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.058 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:45.059 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:45.059 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:45.059 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:45.059 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:45.059 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:45.059 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:45.318 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:45.318 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:45.318 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:45.318 [144/707] Linking static target lib/librte_cmdline.a
00:01:45.318 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:45.577 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:45.577 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:45.577 [148/707] Linking static target lib/librte_metrics.a
00:01:45.577 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:45.577 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:45.837 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.097 [152/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:46.097 [153/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:46.097 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.097 [155/707] Linking static target lib/librte_timer.a
00:01:46.097 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:46.355 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.355 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:46.355 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:46.355 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:46.614 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:46.614 [162/707] Linking static target lib/librte_bitratestats.a
00:01:46.874 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:46.874 [164/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:46.874 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.874 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:46.874 [167/707] Linking static target lib/librte_bbdev.a
00:01:47.133 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:47.392 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:47.392 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:47.392 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:47.392 [172/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.392 [173/707] Linking static target lib/librte_hash.a
00:01:47.651 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:47.651 [175/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:47.651 [176/707] Linking static target lib/librte_ethdev.a
00:01:47.651 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:01:47.651 [178/707] Linking static target lib/acl/libavx2_tmp.a
00:01:47.651 [179/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.651 [180/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:47.911 [181/707] Linking target lib/librte_eal.so.24.0
00:01:47.911 [182/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:47.911 [183/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:47.911 [184/707] Linking target lib/librte_ring.so.24.0
00:01:47.911 [185/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.911 [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:01:47.911 [187/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:47.911 [188/707] Linking target lib/librte_meter.so.24.0
00:01:47.911 [189/707] Linking target lib/librte_pci.so.24.0
00:01:47.911 [190/707] Linking target lib/librte_timer.so.24.0
00:01:48.171 [191/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:48.171 [192/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:48.171 [193/707] Linking target lib/librte_rcu.so.24.0
00:01:48.171 [194/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:48.171 [195/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:48.171 [196/707] Linking static target lib/librte_cfgfile.a
00:01:48.171 [197/707] Linking target lib/librte_mempool.so.24.0
00:01:48.171 [198/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:01:48.171 [199/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:48.171 [200/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:48.171 [201/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:01:48.171 [202/707] Linking target lib/librte_mbuf.so.24.0
00:01:48.431 [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:01:48.431 [204/707] Linking static target lib/librte_bpf.a
00:01:48.431 [205/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.431 [206/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:48.431 [207/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.431 [208/707] Linking target lib/librte_net.so.24.0
00:01:48.431 [209/707] Linking target lib/librte_bbdev.so.24.0
00:01:48.431 [210/707] Linking target lib/librte_cfgfile.so.24.0
00:01:48.431 [211/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:48.431 [212/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:48.691 [213/707] Linking target lib/librte_cmdline.so.24.0
00:01:48.691 [214/707] Linking target lib/librte_hash.so.24.0
00:01:48.691 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:48.691 [216/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.691 [217/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:48.691 [218/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:48.691 [219/707] Linking static target lib/librte_compressdev.a
00:01:48.691 [220/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:01:48.691 [221/707] Linking static target lib/librte_acl.a
00:01:48.691 [222/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:48.950 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:01:48.950 [224/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:48.950 [225/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.950 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:01:48.950 [227/707] Linking target lib/librte_acl.so.24.0
00:01:49.210 [228/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.210 [229/707] Linking target lib/librte_compressdev.so.24.0
00:01:49.210 [230/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:01:49.210 [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:01:49.210 [232/707] Linking static target lib/librte_distributor.a
00:01:49.210 [233/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:49.210 [234/707] Linking static target lib/librte_dmadev.a
00:01:49.210 [235/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:01:49.470 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.470 [237/707] Linking target lib/librte_distributor.so.24.0
00:01:49.470 [238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.729 [239/707] Linking target lib/librte_dmadev.so.24.0
00:01:49.729 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:01:49.729 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:49.729 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:01:49.990 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:01:49.990 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:01:49.990 [245/707] Linking static target lib/librte_efd.a
00:01:50.250 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:01:50.250 [247/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:50.250 [248/707] Linking static target lib/librte_cryptodev.a
00:01:50.250 [249/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.250 [250/707] Linking target lib/librte_efd.so.24.0
00:01:50.510 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:01:50.510 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:01:50.510 [253/707] Linking static target lib/librte_dispatcher.a
00:01:50.510 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:01:50.510 [255/707] Linking static target lib/librte_gpudev.a
00:01:50.770 [256/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:01:50.770 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:01:50.770 [258/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.770 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:01:51.030 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:01:51.030 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:01:51.289 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:01:51.289 [263/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.289 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:01:51.289 [265/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:01:51.289 [266/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.289 [267/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:01:51.289 [268/707] Linking target lib/librte_cryptodev.so.24.0
00:01:51.289 [269/707] Linking static target lib/librte_gro.a
00:01:51.289 [270/707] Linking target lib/librte_gpudev.so.24.0
00:01:51.289 [271/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:01:51.289 [272/707] Linking static target lib/librte_eventdev.a
00:01:51.549 [273/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:51.549 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:01:51.549 [275/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.549 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:01:51.549 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:01:51.549 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:01:51.549 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:01:51.549 [280/707] Linking static target lib/librte_gso.a
00:01:51.549 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.809 [282/707] Linking target lib/librte_ethdev.so.24.0
00:01:51.809 [283/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.809 [284/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:51.809 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:01:51.809 [286/707] Linking target lib/librte_metrics.so.24.0
00:01:51.809 [287/707] Linking target lib/librte_bpf.so.24.0
00:01:51.809 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:01:51.809 [289/707] Linking target lib/librte_gro.so.24.0
00:01:52.069 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:01:52.069 [291/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:01:52.069 [292/707] Linking static target lib/librte_jobstats.a
00:01:52.069 [293/707] Linking target lib/librte_gso.so.24.0
00:01:52.069 [294/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:01:52.069 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:01:52.069 [296/707] Linking target lib/librte_bitratestats.so.24.0
00:01:52.069 [297/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:01:52.069 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:01:52.069 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:01:52.069 [300/707] Linking static target lib/librte_ip_frag.a
00:01:52.329 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.329 [302/707] Linking target lib/librte_jobstats.so.24.0
00:01:52.329 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:01:52.329 [304/707] Linking static target lib/librte_latencystats.a
00:01:52.329 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.329 [306/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:01:52.329 [307/707] Linking target lib/librte_ip_frag.so.24.0
00:01:52.329 [308/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:01:52.329 [309/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:01:52.591 [310/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:01:52.592 [311/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:01:52.592 [312/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.592 [313/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:52.592 [314/707] Linking target lib/librte_latencystats.so.24.0
00:01:52.592 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:52.592 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:52.853 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:01:52.853 [318/707] Linking static target lib/librte_lpm.a
00:01:52.853 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:01:52.853 [320/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:52.853 [321/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:53.113 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:01:53.113 [323/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:53.113 [324/707] Linking static target lib/librte_pcapng.a
00:01:53.113 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:01:53.113 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.113 [327/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.113 [328/707] Linking target lib/librte_lpm.so.24.0
00:01:53.113 [329/707] Linking target lib/librte_eventdev.so.24.0
00:01:53.113 [330/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:53.113 [331/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:01:53.113 [332/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:01:53.113 [333/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.114 [334/707] Linking target lib/librte_dispatcher.so.24.0
00:01:53.374 [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:53.374 [336/707] Linking target lib/librte_pcapng.so.24.0
00:01:53.374 [337/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:53.374 [338/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:01:53.374 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:53.633 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:01:53.633 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:53.633 [342/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:01:53.633 [343/707] Linking static target lib/librte_rawdev.a
00:01:53.633 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:01:53.633 [345/707] Linking static target lib/librte_power.a
00:01:53.633 [346/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:01:53.633 [347/707] Linking static target lib/librte_regexdev.a
00:01:53.633 [348/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:01:53.633 [349/707] Linking static target lib/librte_member.a
00:01:53.633 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:01:53.633 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:01:53.892 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:01:53.892 [353/707] Linking static target lib/librte_mldev.a
00:01:53.892 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.892 [355/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.892 [356/707] Linking target lib/librte_member.so.24.0
00:01:53.892 [357/707] Linking target lib/librte_rawdev.so.24.0
00:01:53.892 [358/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:01:54.150 [359/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:01:54.150 [360/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.150 [361/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:01:54.150 [362/707] Linking target lib/librte_power.so.24.0
00:01:54.150 [363/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:54.150 [364/707] Linking static target lib/librte_reorder.a
00:01:54.150 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.150 [366/707] Linking target
lib/librte_regexdev.so.24.0 00:01:54.150 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:54.150 [368/707] Linking static target lib/librte_rib.a 00:01:54.150 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:54.409 [370/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.409 [371/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:54.409 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:54.409 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:54.409 [374/707] Linking target lib/librte_reorder.so.24.0 00:01:54.409 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:54.409 [376/707] Linking static target lib/librte_stack.a 00:01:54.409 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:54.409 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.409 [379/707] Linking static target lib/librte_security.a 00:01:54.668 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.668 [381/707] Linking target lib/librte_stack.so.24.0 00:01:54.668 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.668 [383/707] Linking target lib/librte_rib.so.24.0 00:01:54.668 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:54.668 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.668 [386/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.926 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.926 [388/707] Linking target lib/librte_mldev.so.24.0 00:01:54.926 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.926 
[390/707] Linking target lib/librte_security.so.24.0 00:01:54.926 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.926 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:55.184 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:55.184 [394/707] Linking static target lib/librte_sched.a 00:01:55.184 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.184 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.443 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.443 [398/707] Linking target lib/librte_sched.so.24.0 00:01:55.443 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.443 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:55.443 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:55.701 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:55.701 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.960 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:55.960 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:55.960 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:55.960 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:56.218 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:56.218 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:56.218 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:56.218 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:56.219 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:56.219 [413/707] Linking static target lib/librte_ipsec.a 00:01:56.219 [414/707] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:56.477 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:56.477 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.477 [417/707] Linking target lib/librte_ipsec.so.24.0 00:01:56.736 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:56.736 [419/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:56.736 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:56.995 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:56.995 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:56.995 [423/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:56.995 [424/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:56.995 [425/707] Linking static target lib/librte_fib.a 00:01:57.254 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:57.254 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:57.254 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:57.254 [429/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:57.254 [430/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.254 [431/707] Linking static target lib/librte_pdcp.a 00:01:57.254 [432/707] Linking target lib/librte_fib.so.24.0 00:01:57.514 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.514 [434/707] Linking target lib/librte_pdcp.so.24.0 00:01:57.772 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:57.772 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:57.772 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:57.772 [438/707] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:57.772 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:58.031 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:58.031 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:58.290 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:58.290 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:58.290 [444/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:58.290 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:58.290 [446/707] Linking static target lib/librte_port.a 00:01:58.290 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:58.290 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:58.549 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:58.549 [450/707] Linking static target lib/librte_pdump.a 00:01:58.549 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:58.549 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:58.808 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:58.808 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.808 [455/707] Linking target lib/librte_port.so.24.0 00:01:58.808 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.808 [457/707] Linking target lib/librte_pdump.so.24.0 00:01:58.808 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:59.067 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:59.067 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:59.067 [461/707] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:59.326 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:59.326 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:59.326 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:59.585 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:59.585 [466/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:59.585 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:59.585 [468/707] Linking static target lib/librte_table.a 00:01:59.845 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:00.104 [470/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:00.104 [471/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.104 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.363 [473/707] Linking target lib/librte_table.so.24.0 00:02:00.363 [474/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:00.363 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:00.363 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:00.363 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:00.363 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:00.931 [479/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:00.931 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:00.931 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:00.931 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:00.931 [483/707] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:01.189 [484/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:01.189 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:01.190 [486/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:01.190 [487/707] Linking static target lib/librte_graph.a 00:02:01.190 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:01.449 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:01.449 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:01.707 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:01.707 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.707 [493/707] Linking target lib/librte_graph.so.24.0 00:02:01.707 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:01.966 [495/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:01.966 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:01.966 [497/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:01.966 [498/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:02.224 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:02.224 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:02.224 [501/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:02.224 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:02.224 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:02.224 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:02.483 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.483 [506/707] Compiling C object 
lib/librte_node.a.p/node_pkt_cls.c.o 00:02:02.483 [507/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:02.742 [508/707] Linking static target lib/librte_node.a 00:02:02.742 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:02.742 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:02.742 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:02.742 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.742 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.000 [514/707] Linking target lib/librte_node.so.24.0 00:02:03.000 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:03.000 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:03.000 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.000 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.259 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.259 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.259 [521/707] Linking static target drivers/librte_bus_pci.a 00:02:03.259 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:03.259 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.259 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.259 [525/707] Linking static target drivers/librte_bus_vdev.a 00:02:03.259 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:03.259 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:03.259 [528/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:03.259 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:03.517 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.517 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:03.517 [532/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.517 [533/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.517 [534/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.517 [535/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:03.517 [536/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:03.776 [537/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:03.776 [538/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:03.776 [539/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:03.777 [540/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.777 [541/707] Linking static target drivers/librte_mempool_ring.a 00:02:03.777 [542/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.777 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:04.035 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.294 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:04.294 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:04.294 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:04.860 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:04.860 
[549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:05.118 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:05.118 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:05.376 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.376 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:05.376 [554/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:05.376 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:05.376 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:05.635 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:05.635 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:05.893 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:05.893 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:06.153 [561/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:06.153 [562/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:06.153 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:06.411 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:06.412 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:06.671 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:06.671 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:06.671 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:06.671 [569/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:06.671 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:06.930 [571/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:06.930 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:06.930 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:06.930 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:06.930 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:07.189 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:07.189 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:07.448 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:07.448 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:07.707 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:07.707 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:07.707 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:07.707 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:07.707 [584/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:07.965 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:07.965 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:07.965 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.965 [588/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:07.965 [589/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:07.965 [590/707] Linking static target drivers/librte_net_i40e.a 00:02:08.232 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:08.493 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:08.493 [593/707] Generating 
drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.493 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:08.493 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:08.493 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:08.752 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:08.752 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:09.011 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:09.011 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:09.011 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:09.270 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:09.270 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:09.270 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:09.529 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:09.529 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:09.529 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:09.529 [608/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:09.529 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:09.529 [610/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:09.794 [611/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.794 [612/707] Linking 
static target lib/librte_vhost.a 00:02:09.794 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:09.794 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:10.061 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:10.321 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:10.321 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:10.321 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:10.580 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.839 [620/707] Linking target lib/librte_vhost.so.24.0 00:02:11.097 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:11.097 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:11.097 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:11.357 [624/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:11.357 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:11.357 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:11.357 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:11.357 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:11.357 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:11.615 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:11.615 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:11.615 [632/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:11.615 [633/707] Compiling C object 
app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:11.874 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:11.874 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:11.874 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:12.133 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:12.133 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:12.133 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:12.133 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:12.133 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:12.391 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:12.391 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:12.391 [644/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:12.391 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:12.650 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:12.650 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:12.650 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:12.908 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:12.908 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:12.908 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:13.167 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:13.167 [653/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:13.167 [654/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:13.167 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:13.167 [656/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:13.167 [657/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:13.426 [658/707] Linking static target lib/librte_pipeline.a 00:02:13.426 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:13.426 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:13.685 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:13.685 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:13.943 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:13.943 [664/707] Linking target app/dpdk-dumpcap 00:02:13.943 [665/707] Linking target app/dpdk-graph 00:02:14.202 [666/707] Linking target app/dpdk-pdump 00:02:14.202 [667/707] Linking target app/dpdk-proc-info 00:02:14.202 [668/707] Linking target app/dpdk-test-acl 00:02:14.202 [669/707] Linking target app/dpdk-test-cmdline 00:02:14.202 [670/707] Linking target app/dpdk-test-bbdev 00:02:14.461 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:14.461 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:14.461 [673/707] Linking target app/dpdk-test-compress-perf 00:02:14.461 [674/707] Linking target app/dpdk-test-crypto-perf 00:02:14.719 [675/707] Linking target app/dpdk-test-dma-perf 00:02:14.719 [676/707] Linking target app/dpdk-test-eventdev 00:02:14.719 [677/707] Linking target app/dpdk-test-fib 00:02:14.719 [678/707] Linking target app/dpdk-test-flow-perf 00:02:14.978 [679/707] Linking target app/dpdk-test-gpudev 00:02:14.978 [680/707] Linking target app/dpdk-test-mldev 00:02:14.978 [681/707] Linking target app/dpdk-test-pipeline 00:02:14.978 [682/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:14.978 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:15.237 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:15.237 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:15.237 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:15.495 [687/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.495 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:15.495 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:15.495 [690/707] Linking target lib/librte_pipeline.so.24.0 00:02:15.495 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:15.753 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:15.753 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:16.011 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:16.011 [695/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:16.011 [696/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:16.269 [697/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:16.269 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:16.269 [699/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:16.528 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:16.528 [701/707] Linking target app/dpdk-test-sad 00:02:16.787 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:16.787 [703/707] Linking target app/dpdk-test-regex 00:02:16.787 [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:16.787 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:17.356 
[706/707] Linking target app/dpdk-test-security-perf 00:02:17.356 [707/707] Linking target app/dpdk-testpmd 00:02:17.356 18:33:17 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:17.356 18:33:17 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:17.356 18:33:17 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:17.356 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.356 [0/1] Installing files. 00:02:17.618 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.618 Installing 
/home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.618 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.619 
Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.619 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.620 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.620 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.620 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.621 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:17.622 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:17.623 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:17.623 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.623 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:17.882 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.882 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pdump.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.883 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.145 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.145 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.145 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.145 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:18.145 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 
00:02:18.145 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.145 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.146 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.147 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:18.148 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:18.148 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:18.148 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:18.148 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:18.148 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:18.148 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:18.148 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:18.148 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:18.148 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:18.148 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:18.148 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:18.148 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:18.148 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:18.148 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:18.148 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:18.148 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:18.148 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:18.148 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:18.148 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:18.148 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:18.148 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:18.148 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:18.148 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:18.148 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:18.148 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:18.148 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:18.148 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:18.148 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:18.148 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:18.148 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:18.148 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:18.148 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:18.148 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:18.148 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:18.148 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:18.148 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:18.148 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:18.148 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:18.148 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:18.148 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:18.148 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:18.148 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:18.148 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:18.148 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:18.148 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:18.148 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:18.148 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:18.148 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:18.148 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:18.148 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:18.148 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:18.148 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:18.149 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:18.149 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:18.149 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:18.149 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:18.149 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:18.149 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:18.149 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:18.149 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:18.149 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:18.149 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:18.149 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:18.149 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:18.149 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:18.149 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:18.149 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:18.149 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:18.149 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:18.149 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:18.149 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:18.149 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:18.149 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:18.149 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:18.149 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:18.149 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:18.149 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:18.149 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:18.149 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:18.149 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:18.149 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:18.149 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:18.149 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:18.149 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:18.149 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:18.149 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:18.149 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:18.149 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:18.149 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:18.149 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:18.149 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:18.149 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:18.149 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:18.149 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:18.149 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:18.149 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:18.149 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:18.149 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:18.149 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:18.149 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:18.149 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:18.149 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:18.149 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:18.149 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:18.149 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:18.149 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:18.149 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:18.149 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:18.149 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:18.149 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:18.149 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:18.149 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:18.149 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:18.149 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:18.149 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:18.149 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:18.149 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:18.149 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:18.149 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:18.149 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:18.149 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:18.149 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:18.149 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:18.149 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:18.149 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:18.149 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:18.149 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:18.149 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:18.149 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:18.149 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:18.149 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:18.149 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:18.149 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:18.149 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:18.149 18:33:18 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:18.149 18:33:18 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:18.149 00:02:18.149 real 0m45.901s 00:02:18.149 user 5m7.038s 00:02:18.149 sys 0m55.765s 00:02:18.149 18:33:18 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:18.149 18:33:18 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:18.149 ************************************ 00:02:18.149 END TEST build_native_dpdk 00:02:18.149 ************************************ 00:02:18.149 18:33:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:18.149 18:33:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:18.149 18:33:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:18.409 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:18.667 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.667 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:18.667 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:18.927 Using 'verbs' RDMA provider 00:02:35.211 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:50.123 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:50.692 Creating mk/config.mk...done. 00:02:50.692 Creating mk/cc.flags.mk...done. 00:02:50.692 Type 'make' to build. 00:02:50.692 18:33:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:50.692 18:33:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:50.692 18:33:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:50.692 18:33:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.692 ************************************ 00:02:50.692 START TEST make 00:02:50.692 ************************************ 00:02:50.692 18:33:50 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:37.423 CC lib/log/log.o 00:03:37.423 CC lib/ut_mock/mock.o 00:03:37.423 CC lib/log/log_flags.o 00:03:37.423 CC lib/log/log_deprecated.o 00:03:37.423 CC lib/ut/ut.o 00:03:37.423 LIB libspdk_ut_mock.a 00:03:37.423 LIB libspdk_ut.a 00:03:37.423 LIB libspdk_log.a 00:03:37.423 SO libspdk_ut_mock.so.6.0 00:03:37.423 SO libspdk_ut.so.2.0 00:03:37.423 SYMLINK libspdk_ut_mock.so 00:03:37.423 SO libspdk_log.so.7.1 00:03:37.423 SYMLINK libspdk_ut.so 00:03:37.423 SYMLINK libspdk_log.so 00:03:37.424 CC lib/util/cpuset.o 00:03:37.424 CC lib/util/crc16.o 00:03:37.424 CC lib/util/base64.o 00:03:37.424 CC lib/util/bit_array.o 00:03:37.424 CC lib/util/crc32.o 00:03:37.424 CC lib/util/crc32c.o 00:03:37.424 CC 
lib/ioat/ioat.o 00:03:37.424 CXX lib/trace_parser/trace.o 00:03:37.424 CC lib/dma/dma.o 00:03:37.424 CC lib/util/crc32_ieee.o 00:03:37.424 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.424 CC lib/util/crc64.o 00:03:37.424 CC lib/vfio_user/host/vfio_user.o 00:03:37.424 CC lib/util/dif.o 00:03:37.424 LIB libspdk_dma.a 00:03:37.424 CC lib/util/fd.o 00:03:37.424 CC lib/util/fd_group.o 00:03:37.424 SO libspdk_dma.so.5.0 00:03:37.424 CC lib/util/file.o 00:03:37.424 CC lib/util/hexlify.o 00:03:37.424 LIB libspdk_ioat.a 00:03:37.424 SYMLINK libspdk_dma.so 00:03:37.424 CC lib/util/iov.o 00:03:37.424 SO libspdk_ioat.so.7.0 00:03:37.424 CC lib/util/math.o 00:03:37.424 SYMLINK libspdk_ioat.so 00:03:37.424 CC lib/util/net.o 00:03:37.424 CC lib/util/pipe.o 00:03:37.424 LIB libspdk_vfio_user.a 00:03:37.424 CC lib/util/strerror_tls.o 00:03:37.424 CC lib/util/string.o 00:03:37.424 SO libspdk_vfio_user.so.5.0 00:03:37.424 SYMLINK libspdk_vfio_user.so 00:03:37.424 CC lib/util/uuid.o 00:03:37.424 CC lib/util/xor.o 00:03:37.424 CC lib/util/zipf.o 00:03:37.424 CC lib/util/md5.o 00:03:37.424 LIB libspdk_util.a 00:03:37.424 SO libspdk_util.so.10.1 00:03:37.424 LIB libspdk_trace_parser.a 00:03:37.424 SO libspdk_trace_parser.so.6.0 00:03:37.424 SYMLINK libspdk_util.so 00:03:37.424 SYMLINK libspdk_trace_parser.so 00:03:37.424 CC lib/conf/conf.o 00:03:37.424 CC lib/env_dpdk/memory.o 00:03:37.424 CC lib/env_dpdk/pci.o 00:03:37.424 CC lib/env_dpdk/threads.o 00:03:37.424 CC lib/env_dpdk/env.o 00:03:37.424 CC lib/env_dpdk/init.o 00:03:37.424 CC lib/idxd/idxd.o 00:03:37.424 CC lib/json/json_parse.o 00:03:37.424 CC lib/rdma_utils/rdma_utils.o 00:03:37.424 CC lib/vmd/vmd.o 00:03:37.424 CC lib/env_dpdk/pci_ioat.o 00:03:37.424 LIB libspdk_conf.a 00:03:37.424 SO libspdk_conf.so.6.0 00:03:37.424 LIB libspdk_rdma_utils.a 00:03:37.424 CC lib/json/json_util.o 00:03:37.424 SO libspdk_rdma_utils.so.1.0 00:03:37.424 SYMLINK libspdk_conf.so 00:03:37.424 CC lib/json/json_write.o 00:03:37.424 CC lib/vmd/led.o 
00:03:37.424 CC lib/env_dpdk/pci_virtio.o 00:03:37.424 SYMLINK libspdk_rdma_utils.so 00:03:37.424 CC lib/env_dpdk/pci_vmd.o 00:03:37.424 CC lib/idxd/idxd_user.o 00:03:37.424 CC lib/idxd/idxd_kernel.o 00:03:37.424 CC lib/env_dpdk/pci_idxd.o 00:03:37.424 CC lib/env_dpdk/pci_event.o 00:03:37.424 LIB libspdk_json.a 00:03:37.424 CC lib/env_dpdk/sigbus_handler.o 00:03:37.424 CC lib/env_dpdk/pci_dpdk.o 00:03:37.424 SO libspdk_json.so.6.0 00:03:37.424 CC lib/rdma_provider/common.o 00:03:37.424 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:37.424 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:37.424 LIB libspdk_idxd.a 00:03:37.424 SYMLINK libspdk_json.so 00:03:37.424 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:37.424 SO libspdk_idxd.so.12.1 00:03:37.424 SYMLINK libspdk_idxd.so 00:03:37.424 LIB libspdk_vmd.a 00:03:37.424 SO libspdk_vmd.so.6.0 00:03:37.424 LIB libspdk_rdma_provider.a 00:03:37.424 CC lib/jsonrpc/jsonrpc_server.o 00:03:37.424 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:37.424 CC lib/jsonrpc/jsonrpc_client.o 00:03:37.424 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:37.424 SYMLINK libspdk_vmd.so 00:03:37.424 SO libspdk_rdma_provider.so.7.0 00:03:37.424 SYMLINK libspdk_rdma_provider.so 00:03:37.424 LIB libspdk_jsonrpc.a 00:03:37.424 SO libspdk_jsonrpc.so.6.0 00:03:37.424 SYMLINK libspdk_jsonrpc.so 00:03:37.424 LIB libspdk_env_dpdk.a 00:03:37.424 SO libspdk_env_dpdk.so.15.1 00:03:37.424 CC lib/rpc/rpc.o 00:03:37.424 SYMLINK libspdk_env_dpdk.so 00:03:37.424 LIB libspdk_rpc.a 00:03:37.424 SO libspdk_rpc.so.6.0 00:03:37.683 SYMLINK libspdk_rpc.so 00:03:37.942 CC lib/trace/trace.o 00:03:37.942 CC lib/trace/trace_rpc.o 00:03:37.942 CC lib/trace/trace_flags.o 00:03:37.942 CC lib/keyring/keyring.o 00:03:37.942 CC lib/keyring/keyring_rpc.o 00:03:37.942 CC lib/notify/notify.o 00:03:37.942 CC lib/notify/notify_rpc.o 00:03:38.202 LIB libspdk_notify.a 00:03:38.202 SO libspdk_notify.so.6.0 00:03:38.202 LIB libspdk_trace.a 00:03:38.202 LIB libspdk_keyring.a 00:03:38.202 SYMLINK libspdk_notify.so 
00:03:38.202 SO libspdk_trace.so.11.0 00:03:38.202 SO libspdk_keyring.so.2.0 00:03:38.462 SYMLINK libspdk_keyring.so 00:03:38.462 SYMLINK libspdk_trace.so 00:03:38.722 CC lib/thread/thread.o 00:03:38.722 CC lib/thread/iobuf.o 00:03:38.722 CC lib/sock/sock.o 00:03:38.722 CC lib/sock/sock_rpc.o 00:03:39.290 LIB libspdk_sock.a 00:03:39.290 SO libspdk_sock.so.10.0 00:03:39.290 SYMLINK libspdk_sock.so 00:03:39.857 CC lib/nvme/nvme_ctrlr.o 00:03:39.857 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:39.857 CC lib/nvme/nvme_pcie_common.o 00:03:39.857 CC lib/nvme/nvme_fabric.o 00:03:39.857 CC lib/nvme/nvme_ns_cmd.o 00:03:39.857 CC lib/nvme/nvme_ns.o 00:03:39.857 CC lib/nvme/nvme_qpair.o 00:03:39.857 CC lib/nvme/nvme_pcie.o 00:03:39.857 CC lib/nvme/nvme.o 00:03:40.423 LIB libspdk_thread.a 00:03:40.423 CC lib/nvme/nvme_quirks.o 00:03:40.423 CC lib/nvme/nvme_transport.o 00:03:40.681 SO libspdk_thread.so.11.0 00:03:40.681 CC lib/nvme/nvme_discovery.o 00:03:40.681 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.681 SYMLINK libspdk_thread.so 00:03:40.681 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.681 CC lib/nvme/nvme_tcp.o 00:03:40.681 CC lib/nvme/nvme_opal.o 00:03:40.939 CC lib/nvme/nvme_io_msg.o 00:03:40.939 CC lib/nvme/nvme_poll_group.o 00:03:41.197 CC lib/nvme/nvme_zns.o 00:03:41.197 CC lib/nvme/nvme_stubs.o 00:03:41.197 CC lib/nvme/nvme_auth.o 00:03:41.197 CC lib/nvme/nvme_cuse.o 00:03:41.455 CC lib/nvme/nvme_rdma.o 00:03:41.455 CC lib/accel/accel.o 00:03:41.455 CC lib/blob/blobstore.o 00:03:41.713 CC lib/blob/request.o 00:03:41.713 CC lib/blob/zeroes.o 00:03:41.713 CC lib/blob/blob_bs_dev.o 00:03:41.713 CC lib/accel/accel_rpc.o 00:03:41.971 CC lib/accel/accel_sw.o 00:03:42.247 CC lib/init/json_config.o 00:03:42.247 CC lib/init/subsystem.o 00:03:42.247 CC lib/virtio/virtio.o 00:03:42.522 CC lib/virtio/virtio_vhost_user.o 00:03:42.522 CC lib/init/subsystem_rpc.o 00:03:42.522 CC lib/fsdev/fsdev.o 00:03:42.522 CC lib/init/rpc.o 00:03:42.522 CC lib/virtio/virtio_vfio_user.o 00:03:42.522 CC 
lib/fsdev/fsdev_io.o 00:03:42.522 CC lib/virtio/virtio_pci.o 00:03:42.522 LIB libspdk_init.a 00:03:42.522 SO libspdk_init.so.6.0 00:03:42.780 CC lib/fsdev/fsdev_rpc.o 00:03:42.780 SYMLINK libspdk_init.so 00:03:42.780 LIB libspdk_accel.a 00:03:42.780 SO libspdk_accel.so.16.0 00:03:42.780 LIB libspdk_virtio.a 00:03:43.038 SO libspdk_virtio.so.7.0 00:03:43.038 SYMLINK libspdk_accel.so 00:03:43.038 CC lib/event/app.o 00:03:43.038 CC lib/event/app_rpc.o 00:03:43.038 CC lib/event/log_rpc.o 00:03:43.038 CC lib/event/reactor.o 00:03:43.038 CC lib/event/scheduler_static.o 00:03:43.038 SYMLINK libspdk_virtio.so 00:03:43.038 LIB libspdk_fsdev.a 00:03:43.038 LIB libspdk_nvme.a 00:03:43.038 SO libspdk_fsdev.so.2.0 00:03:43.296 CC lib/bdev/bdev.o 00:03:43.296 CC lib/bdev/bdev_rpc.o 00:03:43.296 CC lib/bdev/part.o 00:03:43.296 CC lib/bdev/bdev_zone.o 00:03:43.296 SYMLINK libspdk_fsdev.so 00:03:43.296 CC lib/bdev/scsi_nvme.o 00:03:43.296 SO libspdk_nvme.so.15.0 00:03:43.554 LIB libspdk_event.a 00:03:43.554 SO libspdk_event.so.14.0 00:03:43.554 SYMLINK libspdk_nvme.so 00:03:43.554 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:43.554 SYMLINK libspdk_event.so 00:03:44.488 LIB libspdk_fuse_dispatcher.a 00:03:44.488 SO libspdk_fuse_dispatcher.so.1.0 00:03:44.488 SYMLINK libspdk_fuse_dispatcher.so 00:03:45.423 LIB libspdk_blob.a 00:03:45.423 SO libspdk_blob.so.12.0 00:03:45.423 SYMLINK libspdk_blob.so 00:03:45.991 CC lib/blobfs/blobfs.o 00:03:45.991 CC lib/blobfs/tree.o 00:03:45.991 CC lib/lvol/lvol.o 00:03:46.251 LIB libspdk_bdev.a 00:03:46.251 SO libspdk_bdev.so.17.0 00:03:46.508 SYMLINK libspdk_bdev.so 00:03:46.766 CC lib/ftl/ftl_init.o 00:03:46.766 CC lib/ftl/ftl_core.o 00:03:46.766 CC lib/ftl/ftl_debug.o 00:03:46.766 CC lib/ftl/ftl_layout.o 00:03:46.766 CC lib/nbd/nbd.o 00:03:46.766 CC lib/ublk/ublk.o 00:03:46.766 LIB libspdk_blobfs.a 00:03:46.766 CC lib/scsi/dev.o 00:03:46.766 CC lib/nvmf/ctrlr.o 00:03:46.767 SO libspdk_blobfs.so.11.0 00:03:46.767 SYMLINK libspdk_blobfs.so 
00:03:46.767 CC lib/scsi/lun.o 00:03:46.767 LIB libspdk_lvol.a 00:03:47.025 CC lib/scsi/port.o 00:03:47.025 SO libspdk_lvol.so.11.0 00:03:47.025 CC lib/scsi/scsi.o 00:03:47.025 CC lib/nvmf/ctrlr_discovery.o 00:03:47.025 SYMLINK libspdk_lvol.so 00:03:47.025 CC lib/nvmf/ctrlr_bdev.o 00:03:47.025 CC lib/nbd/nbd_rpc.o 00:03:47.025 CC lib/scsi/scsi_bdev.o 00:03:47.025 CC lib/scsi/scsi_pr.o 00:03:47.025 CC lib/ftl/ftl_io.o 00:03:47.025 CC lib/ftl/ftl_sb.o 00:03:47.025 CC lib/ftl/ftl_l2p.o 00:03:47.283 LIB libspdk_nbd.a 00:03:47.283 SO libspdk_nbd.so.7.0 00:03:47.283 SYMLINK libspdk_nbd.so 00:03:47.283 CC lib/ftl/ftl_l2p_flat.o 00:03:47.283 CC lib/ublk/ublk_rpc.o 00:03:47.283 CC lib/ftl/ftl_nv_cache.o 00:03:47.283 CC lib/scsi/scsi_rpc.o 00:03:47.283 CC lib/scsi/task.o 00:03:47.542 CC lib/ftl/ftl_band.o 00:03:47.542 CC lib/nvmf/subsystem.o 00:03:47.542 LIB libspdk_ublk.a 00:03:47.542 CC lib/nvmf/nvmf.o 00:03:47.542 CC lib/ftl/ftl_band_ops.o 00:03:47.542 SO libspdk_ublk.so.3.0 00:03:47.542 SYMLINK libspdk_ublk.so 00:03:47.542 CC lib/nvmf/nvmf_rpc.o 00:03:47.542 CC lib/nvmf/transport.o 00:03:47.542 LIB libspdk_scsi.a 00:03:47.801 SO libspdk_scsi.so.9.0 00:03:47.801 CC lib/nvmf/tcp.o 00:03:47.801 SYMLINK libspdk_scsi.so 00:03:47.801 CC lib/ftl/ftl_writer.o 00:03:47.801 CC lib/nvmf/stubs.o 00:03:47.801 CC lib/ftl/ftl_rq.o 00:03:48.059 CC lib/nvmf/mdns_server.o 00:03:48.059 CC lib/nvmf/rdma.o 00:03:48.317 CC lib/nvmf/auth.o 00:03:48.317 CC lib/ftl/ftl_reloc.o 00:03:48.574 CC lib/ftl/ftl_l2p_cache.o 00:03:48.574 CC lib/ftl/ftl_p2l.o 00:03:48.574 CC lib/ftl/ftl_p2l_log.o 00:03:48.574 CC lib/ftl/mngt/ftl_mngt.o 00:03:48.833 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.833 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.833 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.833 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:49.092 CC lib/iscsi/conn.o 00:03:49.092 CC lib/iscsi/init_grp.o 00:03:49.092 CC lib/iscsi/iscsi.o 00:03:49.092 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:49.092 CC lib/vhost/vhost.o 
00:03:49.092 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:49.092 CC lib/vhost/vhost_rpc.o 00:03:49.351 CC lib/vhost/vhost_scsi.o 00:03:49.351 CC lib/iscsi/param.o 00:03:49.351 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:49.351 CC lib/iscsi/portal_grp.o 00:03:49.351 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:49.609 CC lib/iscsi/tgt_node.o 00:03:49.609 CC lib/iscsi/iscsi_subsystem.o 00:03:49.609 CC lib/iscsi/iscsi_rpc.o 00:03:49.609 CC lib/iscsi/task.o 00:03:49.883 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:49.883 CC lib/vhost/vhost_blk.o 00:03:49.883 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:49.883 CC lib/vhost/rte_vhost_user.o 00:03:49.883 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.142 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.142 CC lib/ftl/utils/ftl_conf.o 00:03:50.142 CC lib/ftl/utils/ftl_md.o 00:03:50.142 CC lib/ftl/utils/ftl_mempool.o 00:03:50.142 CC lib/ftl/utils/ftl_bitmap.o 00:03:50.142 CC lib/ftl/utils/ftl_property.o 00:03:50.142 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:50.400 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:50.400 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:50.400 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:50.400 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:50.400 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:50.400 LIB libspdk_nvmf.a 00:03:50.400 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:50.659 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:50.659 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:50.659 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:50.659 LIB libspdk_iscsi.a 00:03:50.659 SO libspdk_nvmf.so.20.0 00:03:50.659 SO libspdk_iscsi.so.8.0 00:03:50.659 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:50.659 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:50.659 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:50.659 CC lib/ftl/base/ftl_base_dev.o 00:03:50.659 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.659 CC lib/ftl/ftl_trace.o 00:03:50.918 SYMLINK libspdk_iscsi.so 00:03:50.918 SYMLINK libspdk_nvmf.so 00:03:50.918 LIB libspdk_ftl.a 00:03:50.918 LIB libspdk_vhost.a 00:03:51.178 SO libspdk_vhost.so.8.0 
00:03:51.178 SYMLINK libspdk_vhost.so 00:03:51.178 SO libspdk_ftl.so.9.0 00:03:51.438 SYMLINK libspdk_ftl.so 00:03:52.006 CC module/env_dpdk/env_dpdk_rpc.o 00:03:52.006 CC module/accel/error/accel_error.o 00:03:52.006 CC module/accel/ioat/accel_ioat.o 00:03:52.006 CC module/accel/iaa/accel_iaa.o 00:03:52.006 CC module/keyring/file/keyring.o 00:03:52.006 CC module/blob/bdev/blob_bdev.o 00:03:52.006 CC module/accel/dsa/accel_dsa.o 00:03:52.006 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:52.006 CC module/sock/posix/posix.o 00:03:52.006 CC module/fsdev/aio/fsdev_aio.o 00:03:52.006 LIB libspdk_env_dpdk_rpc.a 00:03:52.006 SO libspdk_env_dpdk_rpc.so.6.0 00:03:52.006 SYMLINK libspdk_env_dpdk_rpc.so 00:03:52.006 CC module/keyring/file/keyring_rpc.o 00:03:52.264 CC module/accel/ioat/accel_ioat_rpc.o 00:03:52.264 CC module/accel/iaa/accel_iaa_rpc.o 00:03:52.264 LIB libspdk_scheduler_dynamic.a 00:03:52.264 CC module/accel/error/accel_error_rpc.o 00:03:52.264 SO libspdk_scheduler_dynamic.so.4.0 00:03:52.264 LIB libspdk_keyring_file.a 00:03:52.264 CC module/keyring/linux/keyring.o 00:03:52.264 LIB libspdk_blob_bdev.a 00:03:52.264 SYMLINK libspdk_scheduler_dynamic.so 00:03:52.264 SO libspdk_keyring_file.so.2.0 00:03:52.264 CC module/accel/dsa/accel_dsa_rpc.o 00:03:52.264 SO libspdk_blob_bdev.so.12.0 00:03:52.264 LIB libspdk_accel_ioat.a 00:03:52.264 LIB libspdk_accel_iaa.a 00:03:52.264 SO libspdk_accel_ioat.so.6.0 00:03:52.264 LIB libspdk_accel_error.a 00:03:52.264 SYMLINK libspdk_keyring_file.so 00:03:52.264 SO libspdk_accel_iaa.so.3.0 00:03:52.264 SYMLINK libspdk_blob_bdev.so 00:03:52.264 SO libspdk_accel_error.so.2.0 00:03:52.264 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:52.264 SYMLINK libspdk_accel_ioat.so 00:03:52.264 CC module/fsdev/aio/linux_aio_mgr.o 00:03:52.264 SYMLINK libspdk_accel_iaa.so 00:03:52.264 CC module/keyring/linux/keyring_rpc.o 00:03:52.264 LIB libspdk_accel_dsa.a 00:03:52.522 SYMLINK libspdk_accel_error.so 00:03:52.522 SO libspdk_accel_dsa.so.5.0 
00:03:52.522 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:52.522 LIB libspdk_keyring_linux.a 00:03:52.522 SYMLINK libspdk_accel_dsa.so 00:03:52.522 CC module/scheduler/gscheduler/gscheduler.o 00:03:52.522 SO libspdk_keyring_linux.so.1.0 00:03:52.522 LIB libspdk_scheduler_dpdk_governor.a 00:03:52.522 SYMLINK libspdk_keyring_linux.so 00:03:52.522 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:52.780 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:52.780 LIB libspdk_scheduler_gscheduler.a 00:03:52.780 CC module/bdev/gpt/gpt.o 00:03:52.780 CC module/bdev/delay/vbdev_delay.o 00:03:52.780 CC module/bdev/error/vbdev_error.o 00:03:52.780 LIB libspdk_fsdev_aio.a 00:03:52.780 CC module/blobfs/bdev/blobfs_bdev.o 00:03:52.780 SO libspdk_scheduler_gscheduler.so.4.0 00:03:52.780 CC module/bdev/lvol/vbdev_lvol.o 00:03:52.780 SO libspdk_fsdev_aio.so.1.0 00:03:52.780 CC module/bdev/malloc/bdev_malloc.o 00:03:52.780 SYMLINK libspdk_scheduler_gscheduler.so 00:03:52.780 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:52.780 LIB libspdk_sock_posix.a 00:03:52.780 SYMLINK libspdk_fsdev_aio.so 00:03:52.780 CC module/bdev/null/bdev_null.o 00:03:52.780 SO libspdk_sock_posix.so.6.0 00:03:52.780 CC module/bdev/gpt/vbdev_gpt.o 00:03:52.781 CC module/bdev/null/bdev_null_rpc.o 00:03:53.054 SYMLINK libspdk_sock_posix.so 00:03:53.054 CC module/bdev/error/vbdev_error_rpc.o 00:03:53.054 LIB libspdk_blobfs_bdev.a 00:03:53.054 SO libspdk_blobfs_bdev.so.6.0 00:03:53.054 CC module/bdev/nvme/bdev_nvme.o 00:03:53.054 SYMLINK libspdk_blobfs_bdev.so 00:03:53.054 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:53.054 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:53.054 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:53.054 LIB libspdk_bdev_error.a 00:03:53.054 CC module/bdev/passthru/vbdev_passthru.o 00:03:53.054 SO libspdk_bdev_error.so.6.0 00:03:53.054 LIB libspdk_bdev_null.a 00:03:53.054 SO libspdk_bdev_null.so.6.0 00:03:53.054 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:53.054 LIB libspdk_bdev_gpt.a 
00:03:53.054 SYMLINK libspdk_bdev_error.so 00:03:53.312 SO libspdk_bdev_gpt.so.6.0 00:03:53.313 SYMLINK libspdk_bdev_null.so 00:03:53.313 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:53.313 LIB libspdk_bdev_delay.a 00:03:53.313 SYMLINK libspdk_bdev_gpt.so 00:03:53.313 CC module/bdev/nvme/nvme_rpc.o 00:03:53.313 SO libspdk_bdev_delay.so.6.0 00:03:53.313 LIB libspdk_bdev_malloc.a 00:03:53.313 CC module/bdev/raid/bdev_raid.o 00:03:53.313 SO libspdk_bdev_malloc.so.6.0 00:03:53.313 SYMLINK libspdk_bdev_delay.so 00:03:53.313 SYMLINK libspdk_bdev_malloc.so 00:03:53.313 LIB libspdk_bdev_passthru.a 00:03:53.313 CC module/bdev/split/vbdev_split.o 00:03:53.313 LIB libspdk_bdev_lvol.a 00:03:53.570 SO libspdk_bdev_passthru.so.6.0 00:03:53.570 SO libspdk_bdev_lvol.so.6.0 00:03:53.570 CC module/bdev/split/vbdev_split_rpc.o 00:03:53.570 SYMLINK libspdk_bdev_passthru.so 00:03:53.570 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:53.570 SYMLINK libspdk_bdev_lvol.so 00:03:53.570 CC module/bdev/aio/bdev_aio.o 00:03:53.570 CC module/bdev/ftl/bdev_ftl.o 00:03:53.570 CC module/bdev/aio/bdev_aio_rpc.o 00:03:53.570 LIB libspdk_bdev_split.a 00:03:53.570 CC module/bdev/iscsi/bdev_iscsi.o 00:03:53.829 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:53.829 SO libspdk_bdev_split.so.6.0 00:03:53.829 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:53.829 SYMLINK libspdk_bdev_split.so 00:03:53.829 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:53.829 CC module/bdev/raid/bdev_raid_rpc.o 00:03:53.829 CC module/bdev/raid/bdev_raid_sb.o 00:03:53.829 CC module/bdev/raid/raid0.o 00:03:53.829 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:53.829 LIB libspdk_bdev_aio.a 00:03:53.829 LIB libspdk_bdev_zone_block.a 00:03:54.088 SO libspdk_bdev_zone_block.so.6.0 00:03:54.088 SO libspdk_bdev_aio.so.6.0 00:03:54.088 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:54.088 SYMLINK libspdk_bdev_aio.so 00:03:54.088 CC module/bdev/raid/raid1.o 00:03:54.088 SYMLINK libspdk_bdev_zone_block.so 00:03:54.088 CC 
module/bdev/raid/concat.o 00:03:54.088 LIB libspdk_bdev_iscsi.a 00:03:54.088 LIB libspdk_bdev_ftl.a 00:03:54.088 SO libspdk_bdev_iscsi.so.6.0 00:03:54.088 CC module/bdev/raid/raid5f.o 00:03:54.088 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:54.088 SO libspdk_bdev_ftl.so.6.0 00:03:54.088 SYMLINK libspdk_bdev_iscsi.so 00:03:54.088 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.088 SYMLINK libspdk_bdev_ftl.so 00:03:54.088 CC module/bdev/nvme/vbdev_opal.o 00:03:54.347 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.347 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.347 LIB libspdk_bdev_virtio.a 00:03:54.347 SO libspdk_bdev_virtio.so.6.0 00:03:54.347 SYMLINK libspdk_bdev_virtio.so 00:03:54.607 LIB libspdk_bdev_raid.a 00:03:54.607 SO libspdk_bdev_raid.so.6.0 00:03:54.867 SYMLINK libspdk_bdev_raid.so 00:03:55.805 LIB libspdk_bdev_nvme.a 00:03:55.805 SO libspdk_bdev_nvme.so.7.1 00:03:56.065 SYMLINK libspdk_bdev_nvme.so 00:03:56.635 CC module/event/subsystems/vmd/vmd.o 00:03:56.635 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:56.635 CC module/event/subsystems/sock/sock.o 00:03:56.635 CC module/event/subsystems/keyring/keyring.o 00:03:56.635 CC module/event/subsystems/iobuf/iobuf.o 00:03:56.635 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:56.635 CC module/event/subsystems/fsdev/fsdev.o 00:03:56.635 CC module/event/subsystems/scheduler/scheduler.o 00:03:56.635 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:56.895 LIB libspdk_event_sock.a 00:03:56.895 LIB libspdk_event_vmd.a 00:03:56.895 LIB libspdk_event_keyring.a 00:03:56.895 LIB libspdk_event_fsdev.a 00:03:56.895 LIB libspdk_event_scheduler.a 00:03:56.895 LIB libspdk_event_vhost_blk.a 00:03:56.895 LIB libspdk_event_iobuf.a 00:03:56.895 SO libspdk_event_sock.so.5.0 00:03:56.895 SO libspdk_event_keyring.so.1.0 00:03:56.895 SO libspdk_event_fsdev.so.1.0 00:03:56.895 SO libspdk_event_vmd.so.6.0 00:03:56.895 SO libspdk_event_scheduler.so.4.0 00:03:56.895 SO libspdk_event_vhost_blk.so.3.0 00:03:56.895 SO 
libspdk_event_iobuf.so.3.0 00:03:56.895 SYMLINK libspdk_event_sock.so 00:03:56.895 SYMLINK libspdk_event_keyring.so 00:03:56.895 SYMLINK libspdk_event_fsdev.so 00:03:56.895 SYMLINK libspdk_event_scheduler.so 00:03:56.895 SYMLINK libspdk_event_vmd.so 00:03:56.895 SYMLINK libspdk_event_vhost_blk.so 00:03:56.895 SYMLINK libspdk_event_iobuf.so 00:03:57.465 CC module/event/subsystems/accel/accel.o 00:03:57.465 LIB libspdk_event_accel.a 00:03:57.465 SO libspdk_event_accel.so.6.0 00:03:57.725 SYMLINK libspdk_event_accel.so 00:03:57.985 CC module/event/subsystems/bdev/bdev.o 00:03:58.244 LIB libspdk_event_bdev.a 00:03:58.244 SO libspdk_event_bdev.so.6.0 00:03:58.505 SYMLINK libspdk_event_bdev.so 00:03:58.764 CC module/event/subsystems/ublk/ublk.o 00:03:58.764 CC module/event/subsystems/scsi/scsi.o 00:03:58.764 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.764 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.764 CC module/event/subsystems/nbd/nbd.o 00:03:59.023 LIB libspdk_event_ublk.a 00:03:59.023 LIB libspdk_event_nbd.a 00:03:59.023 LIB libspdk_event_scsi.a 00:03:59.023 SO libspdk_event_ublk.so.3.0 00:03:59.023 SO libspdk_event_nbd.so.6.0 00:03:59.023 SO libspdk_event_scsi.so.6.0 00:03:59.023 LIB libspdk_event_nvmf.a 00:03:59.023 SYMLINK libspdk_event_ublk.so 00:03:59.023 SYMLINK libspdk_event_nbd.so 00:03:59.023 SYMLINK libspdk_event_scsi.so 00:03:59.023 SO libspdk_event_nvmf.so.6.0 00:03:59.023 SYMLINK libspdk_event_nvmf.so 00:03:59.594 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.594 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:59.594 LIB libspdk_event_iscsi.a 00:03:59.594 LIB libspdk_event_vhost_scsi.a 00:03:59.594 SO libspdk_event_vhost_scsi.so.3.0 00:03:59.594 SO libspdk_event_iscsi.so.6.0 00:03:59.854 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.854 SYMLINK libspdk_event_iscsi.so 00:03:59.854 SO libspdk.so.6.0 00:03:59.854 SYMLINK libspdk.so 00:04:00.424 CXX app/trace/trace.o 00:04:00.424 TEST_HEADER include/spdk/accel.h 00:04:00.424 
TEST_HEADER include/spdk/accel_module.h 00:04:00.424 TEST_HEADER include/spdk/assert.h 00:04:00.424 CC test/rpc_client/rpc_client_test.o 00:04:00.424 TEST_HEADER include/spdk/barrier.h 00:04:00.424 TEST_HEADER include/spdk/base64.h 00:04:00.424 TEST_HEADER include/spdk/bdev.h 00:04:00.424 TEST_HEADER include/spdk/bdev_module.h 00:04:00.424 TEST_HEADER include/spdk/bdev_zone.h 00:04:00.424 TEST_HEADER include/spdk/bit_array.h 00:04:00.424 TEST_HEADER include/spdk/bit_pool.h 00:04:00.424 TEST_HEADER include/spdk/blob_bdev.h 00:04:00.424 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:00.424 TEST_HEADER include/spdk/blobfs.h 00:04:00.424 TEST_HEADER include/spdk/blob.h 00:04:00.424 TEST_HEADER include/spdk/conf.h 00:04:00.424 TEST_HEADER include/spdk/config.h 00:04:00.424 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:00.424 TEST_HEADER include/spdk/cpuset.h 00:04:00.424 TEST_HEADER include/spdk/crc16.h 00:04:00.424 TEST_HEADER include/spdk/crc32.h 00:04:00.424 TEST_HEADER include/spdk/crc64.h 00:04:00.424 TEST_HEADER include/spdk/dif.h 00:04:00.424 TEST_HEADER include/spdk/dma.h 00:04:00.424 TEST_HEADER include/spdk/endian.h 00:04:00.424 TEST_HEADER include/spdk/env_dpdk.h 00:04:00.424 TEST_HEADER include/spdk/env.h 00:04:00.424 TEST_HEADER include/spdk/event.h 00:04:00.424 TEST_HEADER include/spdk/fd_group.h 00:04:00.424 TEST_HEADER include/spdk/fd.h 00:04:00.424 TEST_HEADER include/spdk/file.h 00:04:00.424 TEST_HEADER include/spdk/fsdev.h 00:04:00.424 TEST_HEADER include/spdk/fsdev_module.h 00:04:00.424 TEST_HEADER include/spdk/ftl.h 00:04:00.424 TEST_HEADER include/spdk/gpt_spec.h 00:04:00.424 TEST_HEADER include/spdk/hexlify.h 00:04:00.424 TEST_HEADER include/spdk/histogram_data.h 00:04:00.424 TEST_HEADER include/spdk/idxd.h 00:04:00.424 TEST_HEADER include/spdk/idxd_spec.h 00:04:00.424 TEST_HEADER include/spdk/init.h 00:04:00.424 TEST_HEADER include/spdk/ioat.h 00:04:00.424 TEST_HEADER include/spdk/ioat_spec.h 00:04:00.424 CC examples/util/zipf/zipf.o 00:04:00.424 
TEST_HEADER include/spdk/iscsi_spec.h 00:04:00.424 CC test/thread/poller_perf/poller_perf.o 00:04:00.424 TEST_HEADER include/spdk/json.h 00:04:00.424 TEST_HEADER include/spdk/jsonrpc.h 00:04:00.424 TEST_HEADER include/spdk/keyring.h 00:04:00.424 TEST_HEADER include/spdk/keyring_module.h 00:04:00.424 CC examples/ioat/perf/perf.o 00:04:00.424 TEST_HEADER include/spdk/likely.h 00:04:00.424 TEST_HEADER include/spdk/log.h 00:04:00.424 TEST_HEADER include/spdk/lvol.h 00:04:00.424 TEST_HEADER include/spdk/md5.h 00:04:00.424 TEST_HEADER include/spdk/memory.h 00:04:00.424 TEST_HEADER include/spdk/mmio.h 00:04:00.424 TEST_HEADER include/spdk/nbd.h 00:04:00.424 TEST_HEADER include/spdk/net.h 00:04:00.424 TEST_HEADER include/spdk/notify.h 00:04:00.424 TEST_HEADER include/spdk/nvme.h 00:04:00.424 TEST_HEADER include/spdk/nvme_intel.h 00:04:00.424 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:00.424 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:00.424 TEST_HEADER include/spdk/nvme_spec.h 00:04:00.424 TEST_HEADER include/spdk/nvme_zns.h 00:04:00.424 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:00.424 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:00.424 CC test/dma/test_dma/test_dma.o 00:04:00.424 TEST_HEADER include/spdk/nvmf.h 00:04:00.424 CC test/app/bdev_svc/bdev_svc.o 00:04:00.424 TEST_HEADER include/spdk/nvmf_spec.h 00:04:00.424 TEST_HEADER include/spdk/nvmf_transport.h 00:04:00.424 TEST_HEADER include/spdk/opal.h 00:04:00.424 TEST_HEADER include/spdk/opal_spec.h 00:04:00.424 TEST_HEADER include/spdk/pci_ids.h 00:04:00.424 TEST_HEADER include/spdk/pipe.h 00:04:00.424 TEST_HEADER include/spdk/queue.h 00:04:00.424 TEST_HEADER include/spdk/reduce.h 00:04:00.424 TEST_HEADER include/spdk/rpc.h 00:04:00.424 TEST_HEADER include/spdk/scheduler.h 00:04:00.424 TEST_HEADER include/spdk/scsi.h 00:04:00.424 TEST_HEADER include/spdk/scsi_spec.h 00:04:00.424 TEST_HEADER include/spdk/sock.h 00:04:00.424 TEST_HEADER include/spdk/stdinc.h 00:04:00.424 TEST_HEADER include/spdk/string.h 
00:04:00.424 TEST_HEADER include/spdk/thread.h 00:04:00.424 TEST_HEADER include/spdk/trace.h 00:04:00.424 TEST_HEADER include/spdk/trace_parser.h 00:04:00.424 TEST_HEADER include/spdk/tree.h 00:04:00.424 TEST_HEADER include/spdk/ublk.h 00:04:00.424 CC test/env/mem_callbacks/mem_callbacks.o 00:04:00.424 TEST_HEADER include/spdk/util.h 00:04:00.424 TEST_HEADER include/spdk/uuid.h 00:04:00.424 TEST_HEADER include/spdk/version.h 00:04:00.424 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:00.424 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:00.424 TEST_HEADER include/spdk/vhost.h 00:04:00.424 LINK rpc_client_test 00:04:00.424 TEST_HEADER include/spdk/vmd.h 00:04:00.424 TEST_HEADER include/spdk/xor.h 00:04:00.424 TEST_HEADER include/spdk/zipf.h 00:04:00.425 CXX test/cpp_headers/accel.o 00:04:00.425 LINK poller_perf 00:04:00.425 LINK zipf 00:04:00.425 LINK interrupt_tgt 00:04:00.706 LINK bdev_svc 00:04:00.706 LINK ioat_perf 00:04:00.706 CXX test/cpp_headers/accel_module.o 00:04:00.706 LINK spdk_trace 00:04:00.706 CC examples/ioat/verify/verify.o 00:04:00.706 CC app/trace_record/trace_record.o 00:04:00.707 CXX test/cpp_headers/assert.o 00:04:00.707 CC test/env/vtophys/vtophys.o 00:04:00.707 CXX test/cpp_headers/barrier.o 00:04:00.988 CC test/event/event_perf/event_perf.o 00:04:00.988 CC test/event/reactor/reactor.o 00:04:00.988 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.988 LINK test_dma 00:04:00.988 LINK verify 00:04:00.988 LINK vtophys 00:04:00.988 CXX test/cpp_headers/base64.o 00:04:00.988 LINK spdk_trace_record 00:04:00.988 LINK reactor 00:04:00.988 LINK event_perf 00:04:00.988 LINK mem_callbacks 00:04:00.988 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.247 CXX test/cpp_headers/bdev.o 00:04:01.247 LINK env_dpdk_post_init 00:04:01.247 CC app/nvmf_tgt/nvmf_main.o 00:04:01.247 CC test/event/reactor_perf/reactor_perf.o 00:04:01.247 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.247 CC examples/thread/thread/thread_ex.o 00:04:01.247 CC 
examples/sock/hello_world/hello_sock.o 00:04:01.247 CXX test/cpp_headers/bdev_module.o 00:04:01.247 LINK nvme_fuzz 00:04:01.247 CC app/spdk_tgt/spdk_tgt.o 00:04:01.506 CC test/accel/dif/dif.o 00:04:01.506 LINK reactor_perf 00:04:01.506 LINK nvmf_tgt 00:04:01.506 LINK iscsi_tgt 00:04:01.506 CC test/env/memory/memory_ut.o 00:04:01.506 CXX test/cpp_headers/bdev_zone.o 00:04:01.506 LINK spdk_tgt 00:04:01.506 LINK thread 00:04:01.506 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.506 LINK hello_sock 00:04:01.765 CC test/event/app_repeat/app_repeat.o 00:04:01.765 CXX test/cpp_headers/bit_array.o 00:04:01.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.765 CC app/spdk_lspci/spdk_lspci.o 00:04:01.765 CC app/spdk_nvme_perf/perf.o 00:04:01.765 LINK app_repeat 00:04:01.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.765 CC test/blobfs/mkfs/mkfs.o 00:04:01.765 CXX test/cpp_headers/bit_pool.o 00:04:02.023 CC examples/vmd/lsvmd/lsvmd.o 00:04:02.023 LINK spdk_lspci 00:04:02.023 CXX test/cpp_headers/blob_bdev.o 00:04:02.023 LINK mkfs 00:04:02.023 LINK lsvmd 00:04:02.282 CC test/event/scheduler/scheduler.o 00:04:02.282 LINK dif 00:04:02.282 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.282 CC test/env/pci/pci_ut.o 00:04:02.282 LINK vhost_fuzz 00:04:02.282 CC app/spdk_nvme_identify/identify.o 00:04:02.282 CC examples/vmd/led/led.o 00:04:02.282 LINK scheduler 00:04:02.540 CXX test/cpp_headers/blobfs.o 00:04:02.540 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.540 CXX test/cpp_headers/blob.o 00:04:02.540 LINK led 00:04:02.540 CXX test/cpp_headers/conf.o 00:04:02.540 CXX test/cpp_headers/config.o 00:04:02.799 LINK spdk_nvme_discover 00:04:02.799 LINK memory_ut 00:04:02.799 LINK pci_ut 00:04:02.799 CXX test/cpp_headers/cpuset.o 00:04:02.799 LINK spdk_nvme_perf 00:04:02.799 CC test/nvme/aer/aer.o 00:04:03.057 CC examples/idxd/perf/perf.o 00:04:03.057 CXX test/cpp_headers/crc16.o 00:04:03.057 CC test/lvol/esnap/esnap.o 00:04:03.057 CXX test/cpp_headers/crc32.o 
00:04:03.057 CC test/nvme/reset/reset.o 00:04:03.057 CXX test/cpp_headers/crc64.o 00:04:03.057 CC test/nvme/sgl/sgl.o 00:04:03.316 CC test/nvme/e2edp/nvme_dp.o 00:04:03.316 CXX test/cpp_headers/dif.o 00:04:03.316 LINK aer 00:04:03.316 LINK reset 00:04:03.316 CC test/nvme/overhead/overhead.o 00:04:03.316 LINK idxd_perf 00:04:03.316 LINK sgl 00:04:03.316 LINK spdk_nvme_identify 00:04:03.316 CXX test/cpp_headers/dma.o 00:04:03.316 CXX test/cpp_headers/endian.o 00:04:03.574 CXX test/cpp_headers/env_dpdk.o 00:04:03.574 LINK nvme_dp 00:04:03.574 CXX test/cpp_headers/env.o 00:04:03.574 LINK iscsi_fuzz 00:04:03.574 CXX test/cpp_headers/event.o 00:04:03.574 LINK overhead 00:04:03.574 CC app/spdk_top/spdk_top.o 00:04:03.574 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:03.574 CC test/nvme/err_injection/err_injection.o 00:04:03.833 CC test/nvme/startup/startup.o 00:04:03.833 CXX test/cpp_headers/fd_group.o 00:04:03.833 CC test/bdev/bdevio/bdevio.o 00:04:03.833 CC test/nvme/reserve/reserve.o 00:04:03.833 CC test/nvme/simple_copy/simple_copy.o 00:04:03.833 CC test/app/histogram_perf/histogram_perf.o 00:04:03.833 LINK err_injection 00:04:03.833 CXX test/cpp_headers/fd.o 00:04:03.833 LINK startup 00:04:03.833 LINK hello_fsdev 00:04:04.092 LINK reserve 00:04:04.093 LINK histogram_perf 00:04:04.093 CXX test/cpp_headers/file.o 00:04:04.093 CC test/app/jsoncat/jsoncat.o 00:04:04.093 LINK simple_copy 00:04:04.093 LINK bdevio 00:04:04.093 CXX test/cpp_headers/fsdev.o 00:04:04.093 CC test/nvme/connect_stress/connect_stress.o 00:04:04.093 CXX test/cpp_headers/fsdev_module.o 00:04:04.351 CC test/nvme/boot_partition/boot_partition.o 00:04:04.351 LINK jsoncat 00:04:04.351 CC examples/accel/perf/accel_perf.o 00:04:04.351 CXX test/cpp_headers/ftl.o 00:04:04.351 LINK connect_stress 00:04:04.351 CC test/nvme/compliance/nvme_compliance.o 00:04:04.351 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:04.351 CC test/nvme/fused_ordering/fused_ordering.o 00:04:04.351 LINK boot_partition 
00:04:04.610 CC test/app/stub/stub.o 00:04:04.610 CXX test/cpp_headers/gpt_spec.o 00:04:04.610 CXX test/cpp_headers/hexlify.o 00:04:04.610 LINK spdk_top 00:04:04.610 LINK fused_ordering 00:04:04.610 LINK doorbell_aers 00:04:04.610 CXX test/cpp_headers/histogram_data.o 00:04:04.610 LINK stub 00:04:04.868 CC test/nvme/fdp/fdp.o 00:04:04.868 LINK nvme_compliance 00:04:04.868 CXX test/cpp_headers/idxd.o 00:04:04.868 CXX test/cpp_headers/idxd_spec.o 00:04:04.868 CC test/nvme/cuse/cuse.o 00:04:04.868 LINK accel_perf 00:04:04.868 CC app/vhost/vhost.o 00:04:04.868 CC examples/nvme/hello_world/hello_world.o 00:04:04.868 CXX test/cpp_headers/init.o 00:04:05.125 CC examples/blob/hello_world/hello_blob.o 00:04:05.125 CXX test/cpp_headers/ioat.o 00:04:05.125 CC examples/nvme/reconnect/reconnect.o 00:04:05.125 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.125 LINK vhost 00:04:05.125 LINK fdp 00:04:05.125 CXX test/cpp_headers/ioat_spec.o 00:04:05.125 LINK hello_world 00:04:05.125 LINK hello_blob 00:04:05.383 CXX test/cpp_headers/iscsi_spec.o 00:04:05.383 CC examples/blob/cli/blobcli.o 00:04:05.383 CC app/spdk_dd/spdk_dd.o 00:04:05.383 LINK reconnect 00:04:05.383 CXX test/cpp_headers/json.o 00:04:05.383 CC examples/nvme/arbitration/arbitration.o 00:04:05.644 CC app/fio/nvme/fio_plugin.o 00:04:05.644 CXX test/cpp_headers/jsonrpc.o 00:04:05.644 CC examples/bdev/hello_world/hello_bdev.o 00:04:05.644 CC examples/nvme/hotplug/hotplug.o 00:04:05.644 LINK nvme_manage 00:04:05.644 LINK spdk_dd 00:04:05.644 CXX test/cpp_headers/keyring.o 00:04:05.903 LINK arbitration 00:04:05.903 LINK blobcli 00:04:05.903 LINK hello_bdev 00:04:05.903 CXX test/cpp_headers/keyring_module.o 00:04:05.903 LINK hotplug 00:04:05.903 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:06.162 CXX test/cpp_headers/likely.o 00:04:06.162 CXX test/cpp_headers/log.o 00:04:06.162 CC examples/nvme/abort/abort.o 00:04:06.162 CC examples/bdev/bdevperf/bdevperf.o 00:04:06.162 LINK spdk_nvme 00:04:06.162 LINK cmb_copy 
00:04:06.162 CC app/fio/bdev/fio_plugin.o 00:04:06.162 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:06.162 LINK cuse 00:04:06.162 CXX test/cpp_headers/lvol.o 00:04:06.162 CXX test/cpp_headers/md5.o 00:04:06.162 CXX test/cpp_headers/memory.o 00:04:06.421 CXX test/cpp_headers/mmio.o 00:04:06.421 CXX test/cpp_headers/nbd.o 00:04:06.421 LINK pmr_persistence 00:04:06.421 CXX test/cpp_headers/net.o 00:04:06.421 CXX test/cpp_headers/notify.o 00:04:06.421 CXX test/cpp_headers/nvme.o 00:04:06.421 CXX test/cpp_headers/nvme_intel.o 00:04:06.421 CXX test/cpp_headers/nvme_ocssd.o 00:04:06.421 LINK abort 00:04:06.421 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:06.421 CXX test/cpp_headers/nvme_spec.o 00:04:06.680 CXX test/cpp_headers/nvme_zns.o 00:04:06.680 CXX test/cpp_headers/nvmf_cmd.o 00:04:06.680 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:06.680 CXX test/cpp_headers/nvmf.o 00:04:06.680 CXX test/cpp_headers/nvmf_spec.o 00:04:06.680 LINK spdk_bdev 00:04:06.680 CXX test/cpp_headers/nvmf_transport.o 00:04:06.680 CXX test/cpp_headers/opal.o 00:04:06.680 CXX test/cpp_headers/opal_spec.o 00:04:06.680 CXX test/cpp_headers/pci_ids.o 00:04:06.680 CXX test/cpp_headers/pipe.o 00:04:06.680 CXX test/cpp_headers/queue.o 00:04:06.680 CXX test/cpp_headers/reduce.o 00:04:06.680 CXX test/cpp_headers/rpc.o 00:04:06.939 CXX test/cpp_headers/scheduler.o 00:04:06.939 CXX test/cpp_headers/scsi.o 00:04:06.939 CXX test/cpp_headers/scsi_spec.o 00:04:06.939 CXX test/cpp_headers/sock.o 00:04:06.939 CXX test/cpp_headers/stdinc.o 00:04:06.939 CXX test/cpp_headers/string.o 00:04:06.939 CXX test/cpp_headers/thread.o 00:04:06.939 CXX test/cpp_headers/trace.o 00:04:06.939 CXX test/cpp_headers/trace_parser.o 00:04:06.939 CXX test/cpp_headers/tree.o 00:04:06.939 CXX test/cpp_headers/ublk.o 00:04:07.198 CXX test/cpp_headers/util.o 00:04:07.198 CXX test/cpp_headers/uuid.o 00:04:07.198 CXX test/cpp_headers/version.o 00:04:07.198 CXX test/cpp_headers/vfio_user_pci.o 00:04:07.198 CXX 
test/cpp_headers/vfio_user_spec.o 00:04:07.198 CXX test/cpp_headers/vhost.o 00:04:07.198 CXX test/cpp_headers/vmd.o 00:04:07.198 LINK bdevperf 00:04:07.198 CXX test/cpp_headers/xor.o 00:04:07.198 CXX test/cpp_headers/zipf.o 00:04:07.766 CC examples/nvmf/nvmf/nvmf.o 00:04:08.333 LINK nvmf 00:04:09.270 LINK esnap 00:04:09.528 00:04:09.528 real 1m19.040s 00:04:09.528 user 6m6.828s 00:04:09.528 sys 1m14.566s 00:04:09.528 18:35:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:09.528 18:35:09 make -- common/autotest_common.sh@10 -- $ set +x 00:04:09.528 ************************************ 00:04:09.528 END TEST make 00:04:09.528 ************************************ 00:04:09.787 18:35:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:09.787 18:35:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:09.787 18:35:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:09.787 18:35:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.787 18:35:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:09.787 18:35:10 -- pm/common@44 -- $ pid=6199 00:04:09.787 18:35:10 -- pm/common@50 -- $ kill -TERM 6199 00:04:09.787 18:35:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.787 18:35:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:09.787 18:35:10 -- pm/common@44 -- $ pid=6201 00:04:09.787 18:35:10 -- pm/common@50 -- $ kill -TERM 6201 00:04:09.787 18:35:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:09.787 18:35:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:09.788 18:35:10 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.788 18:35:10 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.788 18:35:10 -- common/autotest_common.sh@1711 -- # 
awk '{print $NF}' 00:04:09.788 18:35:10 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.788 18:35:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.788 18:35:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.047 18:35:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.047 18:35:10 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.047 18:35:10 -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.047 18:35:10 -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.047 18:35:10 -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.047 18:35:10 -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.047 18:35:10 -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.047 18:35:10 -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.047 18:35:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.047 18:35:10 -- scripts/common.sh@344 -- # case "$op" in 00:04:10.047 18:35:10 -- scripts/common.sh@345 -- # : 1 00:04:10.047 18:35:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.047 18:35:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.047 18:35:10 -- scripts/common.sh@365 -- # decimal 1 00:04:10.047 18:35:10 -- scripts/common.sh@353 -- # local d=1 00:04:10.047 18:35:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.047 18:35:10 -- scripts/common.sh@355 -- # echo 1 00:04:10.047 18:35:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.047 18:35:10 -- scripts/common.sh@366 -- # decimal 2 00:04:10.047 18:35:10 -- scripts/common.sh@353 -- # local d=2 00:04:10.047 18:35:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.047 18:35:10 -- scripts/common.sh@355 -- # echo 2 00:04:10.047 18:35:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.047 18:35:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.047 18:35:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.047 18:35:10 -- scripts/common.sh@368 -- # return 0 00:04:10.047 18:35:10 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.047 18:35:10 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.047 --rc genhtml_branch_coverage=1 00:04:10.047 --rc genhtml_function_coverage=1 00:04:10.047 --rc genhtml_legend=1 00:04:10.047 --rc geninfo_all_blocks=1 00:04:10.047 --rc geninfo_unexecuted_blocks=1 00:04:10.047 00:04:10.047 ' 00:04:10.047 18:35:10 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.047 --rc genhtml_branch_coverage=1 00:04:10.047 --rc genhtml_function_coverage=1 00:04:10.047 --rc genhtml_legend=1 00:04:10.047 --rc geninfo_all_blocks=1 00:04:10.047 --rc geninfo_unexecuted_blocks=1 00:04:10.047 00:04:10.047 ' 00:04:10.047 18:35:10 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.047 --rc genhtml_branch_coverage=1 00:04:10.047 --rc 
genhtml_function_coverage=1 00:04:10.047 --rc genhtml_legend=1 00:04:10.047 --rc geninfo_all_blocks=1 00:04:10.047 --rc geninfo_unexecuted_blocks=1 00:04:10.047 00:04:10.047 ' 00:04:10.047 18:35:10 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.047 --rc genhtml_branch_coverage=1 00:04:10.047 --rc genhtml_function_coverage=1 00:04:10.047 --rc genhtml_legend=1 00:04:10.047 --rc geninfo_all_blocks=1 00:04:10.047 --rc geninfo_unexecuted_blocks=1 00:04:10.047 00:04:10.047 ' 00:04:10.047 18:35:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:10.047 18:35:10 -- nvmf/common.sh@7 -- # uname -s 00:04:10.047 18:35:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.047 18:35:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.047 18:35:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.047 18:35:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.047 18:35:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.047 18:35:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.047 18:35:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.047 18:35:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.047 18:35:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.047 18:35:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.047 18:35:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6060331-514e-448e-9fbd-57198c1fa4b2 00:04:10.047 18:35:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=f6060331-514e-448e-9fbd-57198c1fa4b2 00:04:10.047 18:35:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.047 18:35:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.047 18:35:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.047 18:35:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
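The `cmp_versions 1.15 '<' 2` trace above splits both version strings on `IFS=.-:` into arrays and compares them component by component. A minimal standalone sketch of that "less than" path (assumptions: the function name is illustrative, components are numeric, and missing components compare as 0, as in the traced script):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the cmp_versions '<' comparison
# traced above. Returns 0 when $1 is strictly less than $2.
version_lt() {
    local IFS=.-:                 # split on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v c1 c2
    for (( v = 0; v < n; v++ )); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing component -> 0
        (( c1 < c2 )) && return 0
        (( c1 > c2 )) && return 1
    done
    return 1                      # equal is not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

This is why the log takes the `lcov_rc_opt` branch: lcov reports version 1.15, which is below 2.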
00:04:10.047 18:35:10 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:10.047 18:35:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:10.047 18:35:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.047 18:35:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.047 18:35:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.047 18:35:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.047 18:35:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.047 18:35:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.047 18:35:10 -- paths/export.sh@5 -- # export PATH 00:04:10.047 18:35:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.047 18:35:10 -- nvmf/common.sh@51 -- # : 0 00:04:10.047 18:35:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:10.047 18:35:10 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:10.047 18:35:10 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:10.047 18:35:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.047 18:35:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.047 18:35:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:10.047 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:10.047 18:35:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:10.047 18:35:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:10.047 18:35:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:10.047 18:35:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:10.047 18:35:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:10.047 18:35:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:10.047 18:35:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:10.047 18:35:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.047 18:35:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:10.047 18:35:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:10.047 18:35:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:10.047 18:35:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:10.047 18:35:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:10.047 18:35:10 -- spdk/autotest.sh@48 -- # udevadm_pid=68597 00:04:10.047 18:35:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:10.047 18:35:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:10.047 18:35:10 -- pm/common@17 -- # local monitor 00:04:10.047 18:35:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.047 18:35:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:10.047 18:35:10 -- pm/common@25 -- # sleep 1 00:04:10.047 18:35:10 -- pm/common@21 -- # date +%s 00:04:10.047 18:35:10 -- 
pm/common@21 -- # date +%s 00:04:10.047 18:35:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734287710 00:04:10.047 18:35:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734287710 00:04:10.047 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734287710_collect-vmstat.pm.log 00:04:10.047 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734287710_collect-cpu-load.pm.log 00:04:10.982 18:35:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:10.982 18:35:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:10.982 18:35:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.982 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:04:10.982 18:35:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:10.982 18:35:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:10.982 18:35:11 -- common/autotest_common.sh@10 -- # set +x 00:04:11.241 18:35:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:11.241 18:35:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:11.241 18:35:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:11.241 18:35:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:11.241 18:35:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:11.241 18:35:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:11.241 18:35:11 -- common/autotest_common.sh@1457 -- # uname 00:04:11.242 18:35:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:11.242 18:35:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:11.242 18:35:11 -- common/autotest_common.sh@1477 -- 
# uname 00:04:11.242 18:35:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:11.242 18:35:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:11.242 18:35:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:11.242 lcov: LCOV version 1.15 00:04:11.242 18:35:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:26.129 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.129 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:41.029 18:35:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:41.029 18:35:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.029 18:35:40 -- common/autotest_common.sh@10 -- # set +x 00:04:41.029 18:35:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:41.029 18:35:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.029 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:41.029 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:41.029 18:35:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:41.029 18:35:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:41.029 18:35:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:41.029 18:35:41 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:41.029 
18:35:41 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:41.029 18:35:41 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:41.029 18:35:41 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:41.029 18:35:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:41.029 18:35:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:41.029 18:35:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:41.029 18:35:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:41.029 18:35:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:41.029 18:35:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:41.029 18:35:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:41.029 18:35:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.029 18:35:41 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:41.029 18:35:41 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:41.029 18:35:41 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:41.029 18:35:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:41.030 18:35:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:41.030 18:35:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:41.030 18:35:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.030 18:35:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.030 18:35:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:41.030 18:35:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:41.030 18:35:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.030 No valid GPT data, bailing 00:04:41.030 18:35:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.030 18:35:41 -- scripts/common.sh@394 -- # pt= 00:04:41.030 18:35:41 -- scripts/common.sh@395 -- # return 1 00:04:41.030 18:35:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.030 1+0 records in 00:04:41.030 1+0 records out 00:04:41.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595962 s, 176 MB/s 00:04:41.030 18:35:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.030 18:35:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.030 18:35:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:41.030 18:35:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:41.030 18:35:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:41.289 No valid GPT data, bailing 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # pt= 00:04:41.289 18:35:41 -- scripts/common.sh@395 -- # return 1 00:04:41.289 18:35:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:41.289 1+0 records in 00:04:41.289 1+0 records 
out 00:04:41.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474981 s, 221 MB/s 00:04:41.289 18:35:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.289 18:35:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.289 18:35:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:41.289 18:35:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:41.289 18:35:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:41.289 No valid GPT data, bailing 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # pt= 00:04:41.289 18:35:41 -- scripts/common.sh@395 -- # return 1 00:04:41.289 18:35:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:41.289 1+0 records in 00:04:41.289 1+0 records out 00:04:41.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511712 s, 205 MB/s 00:04:41.289 18:35:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.289 18:35:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:41.289 18:35:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:41.289 18:35:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:41.289 18:35:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:41.289 No valid GPT data, bailing 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:41.289 18:35:41 -- scripts/common.sh@394 -- # pt= 00:04:41.289 18:35:41 -- scripts/common.sh@395 -- # return 1 00:04:41.289 18:35:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:41.289 1+0 records in 00:04:41.289 1+0 records out 00:04:41.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584066 s, 180 MB/s 00:04:41.289 18:35:41 -- spdk/autotest.sh@105 -- # sync 00:04:41.548 18:35:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:04:41.548 18:35:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.548 18:35:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:44.090 18:35:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:44.090 18:35:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:44.090 18:35:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:44.090 18:35:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:45.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.116 Hugepages 00:04:45.116 node hugesize free / total 00:04:45.116 node0 1048576kB 0 / 0 00:04:45.116 node0 2048kB 0 / 0 00:04:45.116 00:04:45.116 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.116 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.116 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:45.376 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:45.376 18:35:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:45.376 18:35:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:45.376 18:35:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:45.376 18:35:45 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.205 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.205 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.205 18:35:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:47.587 18:35:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:47.587 18:35:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:47.587 18:35:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.587 18:35:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
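The `block_in_use` checks above decide a namespace is reusable when `spdk-gpt.py` finds no GPT and `blkid -s PTTYPE -o value` prints nothing ("No valid GPT data, bailing"), after which autotest wipes the first MiB with `dd`. A sketch of just the blkid half of that decision, under the assumption that an empty PTTYPE means no partition table (the helper name and temp-file demo are illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Illustrative probe: a device (or image file) with no partition table
# makes `blkid -s PTTYPE -o value` print nothing.
is_unpartitioned() {
    [ -z "$(blkid -s PTTYPE -o value "$1" 2>/dev/null)" ]
}

demo=$(mktemp)                                   # zero-filled stand-in device
dd if=/dev/zero of="$demo" bs=1M count=1 status=none
if is_unpartitioned "$demo"; then
    # This is the point where autotest would run its 1 MiB dd wipe.
    echo "no valid partition data, safe to reuse"
fi
rm -f "$demo"
```

On a disk that does carry a table, blkid would print `gpt` or `dos` instead and the wipe would be skipped.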
00:04:47.587 18:35:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:47.587 18:35:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:47.587 18:35:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.587 18:35:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:47.587 18:35:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:47.587 18:35:47 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:47.587 18:35:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:47.587 18:35:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.846 Waiting for block devices as requested 00:04:47.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.106 18:35:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.106 18:35:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:48.106 
18:35:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:48.106 18:35:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.106 18:35:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.106 18:35:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1543 -- # continue 00:04:48.106 18:35:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.106 18:35:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:48.106 18:35:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:48.106 18:35:48 -- 
common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.106 18:35:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:48.106 18:35:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.106 18:35:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.106 18:35:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.107 18:35:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.107 18:35:48 -- common/autotest_common.sh@1543 -- # continue 00:04:48.107 18:35:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:48.107 18:35:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.107 18:35:48 -- common/autotest_common.sh@10 -- # set +x 00:04:48.366 18:35:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:48.366 18:35:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.366 18:35:48 -- common/autotest_common.sh@10 -- # set +x 00:04:48.366 18:35:48 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.305 18:35:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:49.305 18:35:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.305 18:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:49.305 18:35:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:49.305 18:35:49 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:49.305 18:35:49 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.305 18:35:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:49.305 18:35:49 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:49.305 18:35:49 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:49.305 18:35:49 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:49.305 18:35:49 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:49.305 18:35:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.305 18:35:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.305 18:35:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.305 18:35:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.305 18:35:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.565 18:35:49 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:49.565 18:35:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:49.565 18:35:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:49.565 18:35:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:49.565 18:35:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.565 18:35:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.565 18:35:49 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:49.565 18:35:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:49.565 18:35:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.565 18:35:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.565 18:35:49 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:49.565 18:35:49 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:49.565 18:35:49 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:49.565 18:35:49 -- common/autotest_common.sh@1580 -- # return 0 00:04:49.565 18:35:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:49.565 18:35:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:49.565 18:35:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.565 18:35:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:49.565 18:35:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:49.565 18:35:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.565 18:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:49.565 18:35:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:49.565 18:35:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.565 18:35:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.565 18:35:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.565 18:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:49.565 ************************************ 00:04:49.565 START TEST env 00:04:49.565 ************************************ 00:04:49.565 18:35:49 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.565 * Looking for test storage... 
00:04:49.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:49.565 18:35:49 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.565 18:35:49 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.565 18:35:49 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.565 18:35:50 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.565 18:35:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.565 18:35:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.565 18:35:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.565 18:35:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.565 18:35:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.565 18:35:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.824 18:35:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.824 18:35:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.824 18:35:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.824 18:35:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.824 18:35:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.824 18:35:50 env -- scripts/common.sh@344 -- # case "$op" in 00:04:49.825 18:35:50 env -- scripts/common.sh@345 -- # : 1 00:04:49.825 18:35:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.825 18:35:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.825 18:35:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:49.825 18:35:50 env -- scripts/common.sh@353 -- # local d=1 00:04:49.825 18:35:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.825 18:35:50 env -- scripts/common.sh@355 -- # echo 1 00:04:49.825 18:35:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.825 18:35:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:49.825 18:35:50 env -- scripts/common.sh@353 -- # local d=2 00:04:49.825 18:35:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.825 18:35:50 env -- scripts/common.sh@355 -- # echo 2 00:04:49.825 18:35:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.825 18:35:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.825 18:35:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.825 18:35:50 env -- scripts/common.sh@368 -- # return 0 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.825 --rc genhtml_branch_coverage=1 00:04:49.825 --rc genhtml_function_coverage=1 00:04:49.825 --rc genhtml_legend=1 00:04:49.825 --rc geninfo_all_blocks=1 00:04:49.825 --rc geninfo_unexecuted_blocks=1 00:04:49.825 00:04:49.825 ' 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.825 --rc genhtml_branch_coverage=1 00:04:49.825 --rc genhtml_function_coverage=1 00:04:49.825 --rc genhtml_legend=1 00:04:49.825 --rc geninfo_all_blocks=1 00:04:49.825 --rc geninfo_unexecuted_blocks=1 00:04:49.825 00:04:49.825 ' 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:49.825 --rc genhtml_branch_coverage=1 00:04:49.825 --rc genhtml_function_coverage=1 00:04:49.825 --rc genhtml_legend=1 00:04:49.825 --rc geninfo_all_blocks=1 00:04:49.825 --rc geninfo_unexecuted_blocks=1 00:04:49.825 00:04:49.825 ' 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.825 --rc genhtml_branch_coverage=1 00:04:49.825 --rc genhtml_function_coverage=1 00:04:49.825 --rc genhtml_legend=1 00:04:49.825 --rc geninfo_all_blocks=1 00:04:49.825 --rc geninfo_unexecuted_blocks=1 00:04:49.825 00:04:49.825 ' 00:04:49.825 18:35:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.825 18:35:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.825 18:35:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.825 ************************************ 00:04:49.825 START TEST env_memory 00:04:49.825 ************************************ 00:04:49.825 18:35:50 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:49.825 00:04:49.825 00:04:49.825 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.825 http://cunit.sourceforge.net/ 00:04:49.825 00:04:49.825 00:04:49.825 Suite: memory 00:04:49.825 Test: alloc and free memory map ...[2024-12-15 18:35:50.125299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.825 passed 00:04:49.825 Test: mem map translation ...[2024-12-15 18:35:50.173307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.825 [2024-12-15 18:35:50.173406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.825 [2024-12-15 18:35:50.173494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.825 [2024-12-15 18:35:50.173519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.825 passed 00:04:49.825 Test: mem map registration ...[2024-12-15 18:35:50.246872] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.825 [2024-12-15 18:35:50.246946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:50.084 passed 00:04:50.084 Test: mem map adjacent registrations ...passed 00:04:50.084 00:04:50.084 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.084 suites 1 1 n/a 0 0 00:04:50.084 tests 4 4 4 0 0 00:04:50.084 asserts 152 152 152 0 n/a 00:04:50.084 00:04:50.084 Elapsed time = 0.280 seconds 00:04:50.084 00:04:50.084 real 0m0.332s 00:04:50.084 user 0m0.300s 00:04:50.084 sys 0m0.017s 00:04:50.084 18:35:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.084 18:35:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:50.084 ************************************ 00:04:50.084 END TEST env_memory 00:04:50.084 ************************************ 00:04:50.084 18:35:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.084 18:35:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.084 18:35:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.084 18:35:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.084 
************************************ 00:04:50.084 START TEST env_vtophys 00:04:50.084 ************************************ 00:04:50.084 18:35:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.084 EAL: lib.eal log level changed from notice to debug 00:04:50.084 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.084 EAL: Detected lcore 1 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 2 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 3 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 4 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 5 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 6 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 7 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 8 as core 0 on socket 0 00:04:50.085 EAL: Detected lcore 9 as core 0 on socket 0 00:04:50.085 EAL: Maximum logical cores by configuration: 128 00:04:50.085 EAL: Detected CPU lcores: 10 00:04:50.085 EAL: Detected NUMA nodes: 1 00:04:50.085 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:50.085 EAL: Detected shared linkage of DPDK 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:50.085 EAL: Registered [vdev] bus. 
00:04:50.085 EAL: bus.vdev log level changed from disabled to notice 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:50.085 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:50.085 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:50.085 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:50.085 EAL: No shared files mode enabled, IPC will be disabled 00:04:50.085 EAL: No shared files mode enabled, IPC is disabled 00:04:50.085 EAL: Selected IOVA mode 'PA' 00:04:50.085 EAL: Probing VFIO support... 00:04:50.085 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.085 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:50.085 EAL: Ask a virtual area of 0x2e000 bytes 00:04:50.085 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:50.085 EAL: Setting up physically contiguous memory... 
00:04:50.085 EAL: Setting maximum number of open files to 524288 00:04:50.085 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:50.085 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:50.085 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.085 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:50.085 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.085 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.085 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:50.085 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:50.085 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.085 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:50.085 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.085 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.085 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:50.085 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:50.085 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.085 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:50.085 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.085 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.085 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:50.085 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:50.085 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.085 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:50.085 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.085 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.085 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:50.085 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:50.085 EAL: Hugepages will be freed exactly as allocated. 
00:04:50.085 EAL: No shared files mode enabled, IPC is disabled 00:04:50.085 EAL: No shared files mode enabled, IPC is disabled 00:04:50.344 EAL: TSC frequency is ~2290000 KHz 00:04:50.344 EAL: Main lcore 0 is ready (tid=7f0933738a40;cpuset=[0]) 00:04:50.344 EAL: Trying to obtain current memory policy. 00:04:50.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.344 EAL: Restoring previous memory policy: 0 00:04:50.344 EAL: request: mp_malloc_sync 00:04:50.344 EAL: No shared files mode enabled, IPC is disabled 00:04:50.344 EAL: Heap on socket 0 was expanded by 2MB 00:04:50.344 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.344 EAL: No shared files mode enabled, IPC is disabled 00:04:50.344 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:50.344 EAL: Mem event callback 'spdk:(nil)' registered 00:04:50.344 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:50.344 00:04:50.344 00:04:50.344 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.344 http://cunit.sourceforge.net/ 00:04:50.344 00:04:50.344 00:04:50.344 Suite: components_suite 00:04:50.604 Test: vtophys_malloc_test ...passed 00:04:50.604 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:50.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.604 EAL: Restoring previous memory policy: 4 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was expanded by 4MB 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was shrunk by 4MB 00:04:50.604 EAL: Trying to obtain current memory policy. 
00:04:50.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.604 EAL: Restoring previous memory policy: 4 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was expanded by 6MB 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was shrunk by 6MB 00:04:50.604 EAL: Trying to obtain current memory policy. 00:04:50.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.604 EAL: Restoring previous memory policy: 4 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was expanded by 10MB 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was shrunk by 10MB 00:04:50.604 EAL: Trying to obtain current memory policy. 00:04:50.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.604 EAL: Restoring previous memory policy: 4 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.604 EAL: request: mp_malloc_sync 00:04:50.604 EAL: No shared files mode enabled, IPC is disabled 00:04:50.604 EAL: Heap on socket 0 was expanded by 18MB 00:04:50.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.863 EAL: Heap on socket 0 was shrunk by 18MB 00:04:50.863 EAL: Trying to obtain current memory policy. 
00:04:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.863 EAL: Restoring previous memory policy: 4 00:04:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.863 EAL: Heap on socket 0 was expanded by 34MB 00:04:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.863 EAL: Heap on socket 0 was shrunk by 34MB 00:04:50.863 EAL: Trying to obtain current memory policy. 00:04:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.863 EAL: Restoring previous memory policy: 4 00:04:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.863 EAL: Heap on socket 0 was expanded by 66MB 00:04:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.863 EAL: Heap on socket 0 was shrunk by 66MB 00:04:50.863 EAL: Trying to obtain current memory policy. 00:04:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.863 EAL: Restoring previous memory policy: 4 00:04:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.863 EAL: request: mp_malloc_sync 00:04:50.863 EAL: No shared files mode enabled, IPC is disabled 00:04:50.864 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.864 EAL: request: mp_malloc_sync 00:04:50.864 EAL: No shared files mode enabled, IPC is disabled 00:04:50.864 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.864 EAL: Trying to obtain current memory policy. 
00:04:50.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.864 EAL: Restoring previous memory policy: 4 00:04:50.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.864 EAL: request: mp_malloc_sync 00:04:50.864 EAL: No shared files mode enabled, IPC is disabled 00:04:50.864 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.864 EAL: request: mp_malloc_sync 00:04:50.864 EAL: No shared files mode enabled, IPC is disabled 00:04:50.864 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.864 EAL: Trying to obtain current memory policy. 00:04:50.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.123 EAL: Restoring previous memory policy: 4 00:04:51.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.123 EAL: request: mp_malloc_sync 00:04:51.123 EAL: No shared files mode enabled, IPC is disabled 00:04:51.123 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.382 EAL: request: mp_malloc_sync 00:04:51.382 EAL: No shared files mode enabled, IPC is disabled 00:04:51.382 EAL: Heap on socket 0 was shrunk by 514MB 00:04:51.382 EAL: Trying to obtain current memory policy. 
00:04:51.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.382 EAL: Restoring previous memory policy: 4 00:04:51.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.382 EAL: request: mp_malloc_sync 00:04:51.382 EAL: No shared files mode enabled, IPC is disabled 00:04:51.382 EAL: Heap on socket 0 was expanded by 1026MB 00:04:51.642 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.902 passed 00:04:51.902 00:04:51.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.902 suites 1 1 n/a 0 0 00:04:51.902 tests 2 2 2 0 0 00:04:51.902 asserts 5274 5274 5274 0 n/a 00:04:51.902 00:04:51.902 Elapsed time = 1.417 seconds 00:04:51.902 EAL: request: mp_malloc_sync 00:04:51.902 EAL: No shared files mode enabled, IPC is disabled 00:04:51.902 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.902 EAL: request: mp_malloc_sync 00:04:51.902 EAL: No shared files mode enabled, IPC is disabled 00:04:51.902 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.902 EAL: No shared files mode enabled, IPC is disabled 00:04:51.902 EAL: No shared files mode enabled, IPC is disabled 00:04:51.902 EAL: No shared files mode enabled, IPC is disabled 00:04:51.902 00:04:51.902 real 0m1.700s 00:04:51.902 user 0m0.792s 00:04:51.902 sys 0m0.772s 00:04:51.902 18:35:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.902 18:35:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:51.902 ************************************ 00:04:51.902 END TEST env_vtophys 00:04:51.902 ************************************ 00:04:51.902 18:35:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.902 18:35:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.902 18:35:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.902 18:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.902 
************************************ 00:04:51.902 START TEST env_pci 00:04:51.902 ************************************ 00:04:51.902 18:35:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.902 00:04:51.902 00:04:51.902 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.902 http://cunit.sourceforge.net/ 00:04:51.902 00:04:51.902 00:04:51.902 Suite: pci 00:04:51.902 Test: pci_hook ...[2024-12-15 18:35:52.235425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70846 has claimed it 00:04:51.902 passed 00:04:51.902 00:04:51.902 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.902 suites 1 1 n/a 0 0 00:04:51.902 tests 1 1 1 0 0 00:04:51.902 asserts 25 25 25 0 n/a 00:04:51.902 00:04:51.902 Elapsed time = 0.006 seconds 00:04:51.902 EAL: Cannot find device (10000:00:01.0) 00:04:51.902 EAL: Failed to attach device on primary process 00:04:51.902 00:04:51.902 real 0m0.093s 00:04:51.902 user 0m0.038s 00:04:51.902 sys 0m0.054s 00:04:51.902 18:35:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.902 18:35:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:51.902 ************************************ 00:04:51.902 END TEST env_pci 00:04:51.902 ************************************ 00:04:52.162 18:35:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:52.162 18:35:52 env -- env/env.sh@15 -- # uname 00:04:52.162 18:35:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:52.162 18:35:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:52.162 18:35:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.162 18:35:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:52.162 18:35:52 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.162 18:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.162 ************************************ 00:04:52.162 START TEST env_dpdk_post_init 00:04:52.162 ************************************ 00:04:52.162 18:35:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:52.162 EAL: Detected CPU lcores: 10 00:04:52.162 EAL: Detected NUMA nodes: 1 00:04:52.162 EAL: Detected shared linkage of DPDK 00:04:52.162 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.162 EAL: Selected IOVA mode 'PA' 00:04:52.162 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.162 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:52.162 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:52.423 Starting DPDK initialization... 00:04:52.423 Starting SPDK post initialization... 00:04:52.423 SPDK NVMe probe 00:04:52.423 Attaching to 0000:00:10.0 00:04:52.423 Attaching to 0000:00:11.0 00:04:52.423 Attached to 0000:00:10.0 00:04:52.423 Attached to 0000:00:11.0 00:04:52.423 Cleaning up... 
00:04:52.423 00:04:52.423 real 0m0.264s 00:04:52.423 user 0m0.081s 00:04:52.423 sys 0m0.083s 00:04:52.423 18:35:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.423 18:35:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.423 ************************************ 00:04:52.423 END TEST env_dpdk_post_init 00:04:52.423 ************************************ 00:04:52.423 18:35:52 env -- env/env.sh@26 -- # uname 00:04:52.423 18:35:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:52.423 18:35:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.423 18:35:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.423 18:35:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.423 18:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.423 ************************************ 00:04:52.423 START TEST env_mem_callbacks 00:04:52.423 ************************************ 00:04:52.423 18:35:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.423 EAL: Detected CPU lcores: 10 00:04:52.423 EAL: Detected NUMA nodes: 1 00:04:52.423 EAL: Detected shared linkage of DPDK 00:04:52.423 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.423 EAL: Selected IOVA mode 'PA' 00:04:52.683 00:04:52.683 00:04:52.683 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.683 http://cunit.sourceforge.net/ 00:04:52.683 00:04:52.683 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.683 00:04:52.683 Suite: memory 00:04:52.683 Test: test ... 
00:04:52.683 register 0x200000200000 2097152 00:04:52.683 malloc 3145728 00:04:52.683 register 0x200000400000 4194304 00:04:52.683 buf 0x200000500000 len 3145728 PASSED 00:04:52.683 malloc 64 00:04:52.683 buf 0x2000004fff40 len 64 PASSED 00:04:52.683 malloc 4194304 00:04:52.683 register 0x200000800000 6291456 00:04:52.683 buf 0x200000a00000 len 4194304 PASSED 00:04:52.683 free 0x200000500000 3145728 00:04:52.683 free 0x2000004fff40 64 00:04:52.683 unregister 0x200000400000 4194304 PASSED 00:04:52.683 free 0x200000a00000 4194304 00:04:52.683 unregister 0x200000800000 6291456 PASSED 00:04:52.683 malloc 8388608 00:04:52.683 register 0x200000400000 10485760 00:04:52.683 buf 0x200000600000 len 8388608 PASSED 00:04:52.683 free 0x200000600000 8388608 00:04:52.683 unregister 0x200000400000 10485760 PASSED 00:04:52.683 passed 00:04:52.683 00:04:52.683 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.683 suites 1 1 n/a 0 0 00:04:52.683 tests 1 1 1 0 0 00:04:52.683 asserts 15 15 15 0 n/a 00:04:52.683 00:04:52.683 Elapsed time = 0.012 seconds 00:04:52.683 00:04:52.683 real 0m0.205s 00:04:52.683 user 0m0.037s 00:04:52.683 sys 0m0.066s 00:04:52.683 18:35:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.683 18:35:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:52.683 ************************************ 00:04:52.683 END TEST env_mem_callbacks 00:04:52.683 ************************************ 00:04:52.684 00:04:52.684 real 0m3.161s 00:04:52.684 user 0m1.475s 00:04:52.684 sys 0m1.353s 00:04:52.684 18:35:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.684 18:35:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.684 ************************************ 00:04:52.684 END TEST env 00:04:52.684 ************************************ 00:04:52.684 18:35:53 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.684 18:35:53 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.684 18:35:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.684 18:35:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.684 ************************************ 00:04:52.684 START TEST rpc 00:04:52.684 ************************************ 00:04:52.684 18:35:53 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:52.944 * Looking for test storage... 00:04:52.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.944 18:35:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.944 18:35:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.944 18:35:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.944 18:35:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.944 18:35:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.944 18:35:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.944 18:35:53 rpc -- scripts/common.sh@345 -- # : 1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.944 18:35:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.944 18:35:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.944 18:35:53 rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.944 18:35:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.944 18:35:53 rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.944 18:35:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.944 18:35:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.944 18:35:53 rpc -- scripts/common.sh@368 -- # return 0 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.944 --rc genhtml_branch_coverage=1 00:04:52.944 --rc genhtml_function_coverage=1 00:04:52.944 --rc genhtml_legend=1 00:04:52.944 --rc geninfo_all_blocks=1 00:04:52.944 --rc geninfo_unexecuted_blocks=1 00:04:52.944 00:04:52.944 ' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.944 --rc genhtml_branch_coverage=1 00:04:52.944 --rc genhtml_function_coverage=1 00:04:52.944 --rc genhtml_legend=1 00:04:52.944 --rc geninfo_all_blocks=1 00:04:52.944 --rc geninfo_unexecuted_blocks=1 00:04:52.944 00:04:52.944 ' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:52.944 --rc genhtml_branch_coverage=1 00:04:52.944 --rc genhtml_function_coverage=1 00:04:52.944 --rc genhtml_legend=1 00:04:52.944 --rc geninfo_all_blocks=1 00:04:52.944 --rc geninfo_unexecuted_blocks=1 00:04:52.944 00:04:52.944 ' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.944 --rc genhtml_branch_coverage=1 00:04:52.944 --rc genhtml_function_coverage=1 00:04:52.944 --rc genhtml_legend=1 00:04:52.944 --rc geninfo_all_blocks=1 00:04:52.944 --rc geninfo_unexecuted_blocks=1 00:04:52.944 00:04:52.944 ' 00:04:52.944 18:35:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:52.944 18:35:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70973 00:04:52.944 18:35:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.944 18:35:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70973 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 70973 ']' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.944 18:35:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.944 [2024-12-15 18:35:53.366134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:04:52.944 [2024-12-15 18:35:53.366282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ] 00:04:53.203 [2024-12-15 18:35:53.539895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.203 [2024-12-15 18:35:53.568356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:53.203 [2024-12-15 18:35:53.568417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70973' to capture a snapshot of events at runtime. 00:04:53.203 [2024-12-15 18:35:53.568432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:53.203 [2024-12-15 18:35:53.568441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:53.203 [2024-12-15 18:35:53.568450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70973 for offline analysis/debug. 
00:04:53.203 [2024-12-15 18:35:53.568848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.802 18:35:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.802 18:35:54 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:53.802 18:35:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.802 18:35:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.802 18:35:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.802 18:35:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.802 18:35:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.802 18:35:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.802 18:35:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.802 ************************************ 00:04:53.802 START TEST rpc_integrity 00:04:53.802 ************************************ 00:04:53.802 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:53.802 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.802 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.802 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.802 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.802 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.802 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.062 18:35:54 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.062 { 00:04:54.062 "name": "Malloc0", 00:04:54.062 "aliases": [ 00:04:54.062 "1ac5dcdf-f563-43a8-b4c7-ba9f46db2a50" 00:04:54.062 ], 00:04:54.062 "product_name": "Malloc disk", 00:04:54.062 "block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "1ac5dcdf-f563-43a8-b4c7-ba9f46db2a50", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": false, 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false, 00:04:54.062 "nvme_io_md": false, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "zcopy": true, 00:04:54.062 "get_zone_info": false, 00:04:54.062 "zone_management": false, 00:04:54.062 "zone_append": false, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "seek_hole": false, 
00:04:54.062 "seek_data": false, 00:04:54.062 "copy": true, 00:04:54.062 "nvme_iov_md": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "system", 00:04:54.062 "dma_device_type": 1 00:04:54.062 }, 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": {} 00:04:54.062 } 00:04:54.062 ]' 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 [2024-12-15 18:35:54.337331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:54.062 [2024-12-15 18:35:54.337411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.062 [2024-12-15 18:35:54.337468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:54.062 [2024-12-15 18:35:54.337482] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.062 [2024-12-15 18:35:54.339873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.062 [2024-12-15 18:35:54.339912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.062 Passthru0 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.062 { 00:04:54.062 "name": "Malloc0", 00:04:54.062 "aliases": [ 00:04:54.062 "1ac5dcdf-f563-43a8-b4c7-ba9f46db2a50" 00:04:54.062 ], 00:04:54.062 "product_name": "Malloc disk", 00:04:54.062 "block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "1ac5dcdf-f563-43a8-b4c7-ba9f46db2a50", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": true, 00:04:54.062 "claim_type": "exclusive_write", 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false, 00:04:54.062 "nvme_io_md": false, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "zcopy": true, 00:04:54.062 "get_zone_info": false, 00:04:54.062 "zone_management": false, 00:04:54.062 "zone_append": false, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "seek_hole": false, 00:04:54.062 "seek_data": false, 00:04:54.062 "copy": true, 00:04:54.062 "nvme_iov_md": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "system", 00:04:54.062 "dma_device_type": 1 00:04:54.062 }, 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": {} 00:04:54.062 }, 00:04:54.062 { 00:04:54.062 "name": "Passthru0", 00:04:54.062 "aliases": [ 00:04:54.062 "5821ebb4-897f-58be-866b-12c02bd0f0ba" 00:04:54.062 ], 00:04:54.062 "product_name": "passthru", 00:04:54.062 
"block_size": 512, 00:04:54.062 "num_blocks": 16384, 00:04:54.062 "uuid": "5821ebb4-897f-58be-866b-12c02bd0f0ba", 00:04:54.062 "assigned_rate_limits": { 00:04:54.062 "rw_ios_per_sec": 0, 00:04:54.062 "rw_mbytes_per_sec": 0, 00:04:54.062 "r_mbytes_per_sec": 0, 00:04:54.062 "w_mbytes_per_sec": 0 00:04:54.062 }, 00:04:54.062 "claimed": false, 00:04:54.062 "zoned": false, 00:04:54.062 "supported_io_types": { 00:04:54.062 "read": true, 00:04:54.062 "write": true, 00:04:54.062 "unmap": true, 00:04:54.062 "flush": true, 00:04:54.062 "reset": true, 00:04:54.062 "nvme_admin": false, 00:04:54.062 "nvme_io": false, 00:04:54.062 "nvme_io_md": false, 00:04:54.062 "write_zeroes": true, 00:04:54.062 "zcopy": true, 00:04:54.062 "get_zone_info": false, 00:04:54.062 "zone_management": false, 00:04:54.062 "zone_append": false, 00:04:54.062 "compare": false, 00:04:54.062 "compare_and_write": false, 00:04:54.062 "abort": true, 00:04:54.062 "seek_hole": false, 00:04:54.062 "seek_data": false, 00:04:54.062 "copy": true, 00:04:54.062 "nvme_iov_md": false 00:04:54.062 }, 00:04:54.062 "memory_domains": [ 00:04:54.062 { 00:04:54.062 "dma_device_id": "system", 00:04:54.062 "dma_device_type": 1 00:04:54.062 }, 00:04:54.062 { 00:04:54.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.062 "dma_device_type": 2 00:04:54.062 } 00:04:54.062 ], 00:04:54.062 "driver_specific": { 00:04:54.062 "passthru": { 00:04:54.062 "name": "Passthru0", 00:04:54.062 "base_bdev_name": "Malloc0" 00:04:54.062 } 00:04:54.062 } 00:04:54.062 } 00:04:54.062 ]' 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 18:35:54 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.062 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.062 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.063 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:54.321 18:35:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.321 00:04:54.321 real 0m0.311s 00:04:54.321 user 0m0.185s 00:04:54.321 sys 0m0.053s 00:04:54.321 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.321 18:35:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 ************************************ 00:04:54.321 END TEST rpc_integrity 00:04:54.321 ************************************ 00:04:54.321 18:35:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:54.321 18:35:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.321 18:35:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.321 18:35:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 ************************************ 00:04:54.321 START TEST rpc_plugins 00:04:54.321 ************************************ 00:04:54.321 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:54.321 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:54.321 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.321 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.321 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.321 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:54.321 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:54.321 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:54.322 { 00:04:54.322 "name": "Malloc1", 00:04:54.322 "aliases": [ 00:04:54.322 "7fa44301-a5ef-4dd9-bfb1-970a1425bbe0" 00:04:54.322 ], 00:04:54.322 "product_name": "Malloc disk", 00:04:54.322 "block_size": 4096, 00:04:54.322 "num_blocks": 256, 00:04:54.322 "uuid": "7fa44301-a5ef-4dd9-bfb1-970a1425bbe0", 00:04:54.322 "assigned_rate_limits": { 00:04:54.322 "rw_ios_per_sec": 0, 00:04:54.322 "rw_mbytes_per_sec": 0, 00:04:54.322 "r_mbytes_per_sec": 0, 00:04:54.322 "w_mbytes_per_sec": 0 00:04:54.322 }, 00:04:54.322 "claimed": false, 00:04:54.322 "zoned": false, 00:04:54.322 "supported_io_types": { 00:04:54.322 "read": true, 00:04:54.322 "write": true, 00:04:54.322 "unmap": true, 00:04:54.322 "flush": true, 00:04:54.322 "reset": true, 00:04:54.322 "nvme_admin": false, 00:04:54.322 "nvme_io": false, 00:04:54.322 "nvme_io_md": false, 00:04:54.322 "write_zeroes": true, 00:04:54.322 "zcopy": true, 00:04:54.322 "get_zone_info": false, 00:04:54.322 "zone_management": false, 00:04:54.322 "zone_append": false, 00:04:54.322 "compare": false, 00:04:54.322 "compare_and_write": false, 00:04:54.322 "abort": true, 00:04:54.322 "seek_hole": false, 00:04:54.322 "seek_data": false, 00:04:54.322 "copy": 
true, 00:04:54.322 "nvme_iov_md": false 00:04:54.322 }, 00:04:54.322 "memory_domains": [ 00:04:54.322 { 00:04:54.322 "dma_device_id": "system", 00:04:54.322 "dma_device_type": 1 00:04:54.322 }, 00:04:54.322 { 00:04:54.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.322 "dma_device_type": 2 00:04:54.322 } 00:04:54.322 ], 00:04:54.322 "driver_specific": {} 00:04:54.322 } 00:04:54.322 ]' 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:54.322 18:35:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.322 00:04:54.322 real 0m0.163s 00:04:54.322 user 0m0.098s 00:04:54.322 sys 0m0.025s 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.322 18:35:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.322 ************************************ 00:04:54.322 END TEST rpc_plugins 00:04:54.322 ************************************ 00:04:54.581 18:35:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.581 18:35:54 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.581 18:35:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.581 18:35:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 ************************************ 00:04:54.581 START TEST rpc_trace_cmd_test 00:04:54.581 ************************************ 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.581 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:54.581 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70973", 00:04:54.581 "tpoint_group_mask": "0x8", 00:04:54.581 "iscsi_conn": { 00:04:54.581 "mask": "0x2", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "scsi": { 00:04:54.581 "mask": "0x4", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "bdev": { 00:04:54.581 "mask": "0x8", 00:04:54.581 "tpoint_mask": "0xffffffffffffffff" 00:04:54.581 }, 00:04:54.581 "nvmf_rdma": { 00:04:54.581 "mask": "0x10", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "nvmf_tcp": { 00:04:54.581 "mask": "0x20", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "ftl": { 00:04:54.581 "mask": "0x40", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "blobfs": { 00:04:54.581 "mask": "0x80", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "dsa": { 00:04:54.581 "mask": "0x200", 00:04:54.581 "tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.581 "thread": { 00:04:54.581 "mask": "0x400", 00:04:54.581 
"tpoint_mask": "0x0" 00:04:54.581 }, 00:04:54.582 "nvme_pcie": { 00:04:54.582 "mask": "0x800", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "iaa": { 00:04:54.582 "mask": "0x1000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "nvme_tcp": { 00:04:54.582 "mask": "0x2000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "bdev_nvme": { 00:04:54.582 "mask": "0x4000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "sock": { 00:04:54.582 "mask": "0x8000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "blob": { 00:04:54.582 "mask": "0x10000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "bdev_raid": { 00:04:54.582 "mask": "0x20000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 }, 00:04:54.582 "scheduler": { 00:04:54.582 "mask": "0x40000", 00:04:54.582 "tpoint_mask": "0x0" 00:04:54.582 } 00:04:54.582 }' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.582 18:35:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.582 18:35:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.582 00:04:54.582 real 0m0.227s 00:04:54.582 user 0m0.186s 00:04:54.582 sys 0m0.034s 00:04:54.582 18:35:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:54.582 18:35:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.582 ************************************ 00:04:54.582 END TEST rpc_trace_cmd_test 00:04:54.582 ************************************ 00:04:54.845 18:35:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.846 18:35:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.846 18:35:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.846 18:35:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.846 18:35:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.846 18:35:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 ************************************ 00:04:54.846 START TEST rpc_daemon_integrity 00:04:54.846 ************************************ 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.846 { 00:04:54.846 "name": "Malloc2", 00:04:54.846 "aliases": [ 00:04:54.846 "928f88de-5ca5-4d1b-82e9-63976b0be301" 00:04:54.846 ], 00:04:54.846 "product_name": "Malloc disk", 00:04:54.846 "block_size": 512, 00:04:54.846 "num_blocks": 16384, 00:04:54.846 "uuid": "928f88de-5ca5-4d1b-82e9-63976b0be301", 00:04:54.846 "assigned_rate_limits": { 00:04:54.846 "rw_ios_per_sec": 0, 00:04:54.846 "rw_mbytes_per_sec": 0, 00:04:54.846 "r_mbytes_per_sec": 0, 00:04:54.846 "w_mbytes_per_sec": 0 00:04:54.846 }, 00:04:54.846 "claimed": false, 00:04:54.846 "zoned": false, 00:04:54.846 "supported_io_types": { 00:04:54.846 "read": true, 00:04:54.846 "write": true, 00:04:54.846 "unmap": true, 00:04:54.846 "flush": true, 00:04:54.846 "reset": true, 00:04:54.846 "nvme_admin": false, 00:04:54.846 "nvme_io": false, 00:04:54.846 "nvme_io_md": false, 00:04:54.846 "write_zeroes": true, 00:04:54.846 "zcopy": true, 00:04:54.846 "get_zone_info": false, 00:04:54.846 "zone_management": false, 00:04:54.846 "zone_append": false, 00:04:54.846 "compare": false, 00:04:54.846 "compare_and_write": false, 00:04:54.846 "abort": true, 00:04:54.846 "seek_hole": false, 00:04:54.846 "seek_data": false, 00:04:54.846 "copy": true, 00:04:54.846 "nvme_iov_md": false 00:04:54.846 }, 00:04:54.846 "memory_domains": [ 00:04:54.846 { 00:04:54.846 "dma_device_id": "system", 00:04:54.846 "dma_device_type": 1 00:04:54.846 }, 00:04:54.846 { 00:04:54.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.846 "dma_device_type": 2 00:04:54.846 } 
00:04:54.846 ], 00:04:54.846 "driver_specific": {} 00:04:54.846 } 00:04:54.846 ]' 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 [2024-12-15 18:35:55.212714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.846 [2024-12-15 18:35:55.212795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.846 [2024-12-15 18:35:55.212835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:54.846 [2024-12-15 18:35:55.212845] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.846 [2024-12-15 18:35:55.215436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.846 [2024-12-15 18:35:55.215483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.846 Passthru0 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.846 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.846 { 00:04:54.846 "name": "Malloc2", 00:04:54.846 "aliases": [ 00:04:54.846 "928f88de-5ca5-4d1b-82e9-63976b0be301" 
00:04:54.846 ], 00:04:54.846 "product_name": "Malloc disk", 00:04:54.846 "block_size": 512, 00:04:54.846 "num_blocks": 16384, 00:04:54.846 "uuid": "928f88de-5ca5-4d1b-82e9-63976b0be301", 00:04:54.846 "assigned_rate_limits": { 00:04:54.846 "rw_ios_per_sec": 0, 00:04:54.846 "rw_mbytes_per_sec": 0, 00:04:54.846 "r_mbytes_per_sec": 0, 00:04:54.846 "w_mbytes_per_sec": 0 00:04:54.846 }, 00:04:54.846 "claimed": true, 00:04:54.846 "claim_type": "exclusive_write", 00:04:54.846 "zoned": false, 00:04:54.846 "supported_io_types": { 00:04:54.846 "read": true, 00:04:54.846 "write": true, 00:04:54.846 "unmap": true, 00:04:54.846 "flush": true, 00:04:54.846 "reset": true, 00:04:54.846 "nvme_admin": false, 00:04:54.846 "nvme_io": false, 00:04:54.846 "nvme_io_md": false, 00:04:54.846 "write_zeroes": true, 00:04:54.846 "zcopy": true, 00:04:54.846 "get_zone_info": false, 00:04:54.846 "zone_management": false, 00:04:54.846 "zone_append": false, 00:04:54.847 "compare": false, 00:04:54.847 "compare_and_write": false, 00:04:54.847 "abort": true, 00:04:54.847 "seek_hole": false, 00:04:54.847 "seek_data": false, 00:04:54.847 "copy": true, 00:04:54.847 "nvme_iov_md": false 00:04:54.847 }, 00:04:54.847 "memory_domains": [ 00:04:54.847 { 00:04:54.847 "dma_device_id": "system", 00:04:54.847 "dma_device_type": 1 00:04:54.847 }, 00:04:54.847 { 00:04:54.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.847 "dma_device_type": 2 00:04:54.847 } 00:04:54.847 ], 00:04:54.847 "driver_specific": {} 00:04:54.847 }, 00:04:54.847 { 00:04:54.847 "name": "Passthru0", 00:04:54.847 "aliases": [ 00:04:54.847 "92b593f2-8c7f-5875-a842-bccce07f8b6b" 00:04:54.847 ], 00:04:54.847 "product_name": "passthru", 00:04:54.847 "block_size": 512, 00:04:54.847 "num_blocks": 16384, 00:04:54.847 "uuid": "92b593f2-8c7f-5875-a842-bccce07f8b6b", 00:04:54.847 "assigned_rate_limits": { 00:04:54.847 "rw_ios_per_sec": 0, 00:04:54.847 "rw_mbytes_per_sec": 0, 00:04:54.847 "r_mbytes_per_sec": 0, 00:04:54.847 "w_mbytes_per_sec": 0 
00:04:54.847 }, 00:04:54.847 "claimed": false, 00:04:54.847 "zoned": false, 00:04:54.847 "supported_io_types": { 00:04:54.847 "read": true, 00:04:54.847 "write": true, 00:04:54.847 "unmap": true, 00:04:54.847 "flush": true, 00:04:54.847 "reset": true, 00:04:54.847 "nvme_admin": false, 00:04:54.847 "nvme_io": false, 00:04:54.847 "nvme_io_md": false, 00:04:54.847 "write_zeroes": true, 00:04:54.847 "zcopy": true, 00:04:54.847 "get_zone_info": false, 00:04:54.847 "zone_management": false, 00:04:54.847 "zone_append": false, 00:04:54.847 "compare": false, 00:04:54.847 "compare_and_write": false, 00:04:54.847 "abort": true, 00:04:54.847 "seek_hole": false, 00:04:54.847 "seek_data": false, 00:04:54.847 "copy": true, 00:04:54.847 "nvme_iov_md": false 00:04:54.847 }, 00:04:54.847 "memory_domains": [ 00:04:54.847 { 00:04:54.847 "dma_device_id": "system", 00:04:54.847 "dma_device_type": 1 00:04:54.847 }, 00:04:54.847 { 00:04:54.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.847 "dma_device_type": 2 00:04:54.847 } 00:04:54.847 ], 00:04:54.847 "driver_specific": { 00:04:54.847 "passthru": { 00:04:54.847 "name": "Passthru0", 00:04:54.847 "base_bdev_name": "Malloc2" 00:04:54.847 } 00:04:54.847 } 00:04:54.847 } 00:04:54.847 ]' 00:04:54.847 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.108 00:04:55.108 real 0m0.306s 00:04:55.108 user 0m0.188s 00:04:55.108 sys 0m0.049s 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.108 18:35:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.108 ************************************ 00:04:55.108 END TEST rpc_daemon_integrity 00:04:55.108 ************************************ 00:04:55.108 18:35:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:55.108 18:35:55 rpc -- rpc/rpc.sh@84 -- # killprocess 70973 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 70973 ']' 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@958 -- # kill -0 70973 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70973 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.108 
killing process with pid 70973 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70973' 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@973 -- # kill 70973 00:04:55.108 18:35:55 rpc -- common/autotest_common.sh@978 -- # wait 70973 00:04:55.675 00:04:55.675 real 0m2.815s 00:04:55.675 user 0m3.341s 00:04:55.675 sys 0m0.868s 00:04:55.675 18:35:55 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.675 18:35:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.675 ************************************ 00:04:55.675 END TEST rpc 00:04:55.675 ************************************ 00:04:55.675 18:35:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:55.675 18:35:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.675 18:35:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.675 18:35:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.675 ************************************ 00:04:55.675 START TEST skip_rpc 00:04:55.675 ************************************ 00:04:55.675 18:35:55 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:55.675 * Looking for test storage... 
00:04:55.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.675 18:35:56 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.675 18:35:56 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.675 18:35:56 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.675 18:35:56 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.675 18:35:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.934 18:35:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.934 18:35:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.934 18:35:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.934 18:35:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.934 --rc genhtml_branch_coverage=1 00:04:55.934 --rc genhtml_function_coverage=1 00:04:55.934 --rc genhtml_legend=1 00:04:55.934 --rc geninfo_all_blocks=1 00:04:55.934 --rc geninfo_unexecuted_blocks=1 00:04:55.934 00:04:55.934 ' 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.934 --rc genhtml_branch_coverage=1 00:04:55.934 --rc genhtml_function_coverage=1 00:04:55.934 --rc genhtml_legend=1 00:04:55.934 --rc geninfo_all_blocks=1 00:04:55.934 --rc geninfo_unexecuted_blocks=1 00:04:55.934 00:04:55.934 ' 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.934 --rc genhtml_branch_coverage=1 00:04:55.934 --rc genhtml_function_coverage=1 00:04:55.934 --rc genhtml_legend=1 00:04:55.934 --rc geninfo_all_blocks=1 00:04:55.934 --rc geninfo_unexecuted_blocks=1 00:04:55.934 00:04:55.934 ' 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.934 --rc genhtml_branch_coverage=1 00:04:55.934 --rc genhtml_function_coverage=1 00:04:55.934 --rc genhtml_legend=1 00:04:55.934 --rc geninfo_all_blocks=1 00:04:55.934 --rc geninfo_unexecuted_blocks=1 00:04:55.934 00:04:55.934 ' 00:04:55.934 18:35:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.934 18:35:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:55.934 18:35:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.934 18:35:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.934 ************************************ 00:04:55.934 START TEST skip_rpc 00:04:55.934 ************************************ 00:04:55.934 18:35:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:55.934 18:35:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=71175 00:04:55.934 18:35:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:55.934 18:35:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.934 18:35:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:55.934 [2024-12-15 18:35:56.228942] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:55.934 [2024-12-15 18:35:56.229168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:04:56.192 [2024-12-15 18:35:56.402561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.193 [2024-12-15 18:35:56.432731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71175 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71175 ']' 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71175 00:05:01.466 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71175 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71175' 00:05:01.467 killing process with pid 71175 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71175 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71175 00:05:01.467 00:05:01.467 real 0m5.440s 00:05:01.467 user 0m5.029s 00:05:01.467 sys 0m0.337s 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.467 18:36:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.467 ************************************ 00:05:01.467 END TEST skip_rpc 00:05:01.467 ************************************ 00:05:01.467 18:36:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:01.467 18:36:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.467 18:36:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.467 18:36:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.467 
************************************ 00:05:01.467 START TEST skip_rpc_with_json 00:05:01.467 ************************************ 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71262 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71262 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71262 ']' 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.467 18:36:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.467 [2024-12-15 18:36:01.737369] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:01.467 [2024-12-15 18:36:01.737587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:05:01.726 [2024-12-15 18:36:01.911660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.726 [2024-12-15 18:36:01.941254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.295 [2024-12-15 18:36:02.577475] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:02.295 request: 00:05:02.295 { 00:05:02.295 "trtype": "tcp", 00:05:02.295 "method": "nvmf_get_transports", 00:05:02.295 "req_id": 1 00:05:02.295 } 00:05:02.295 Got JSON-RPC error response 00:05:02.295 response: 00:05:02.295 { 00:05:02.295 "code": -19, 00:05:02.295 "message": "No such device" 00:05:02.295 } 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.295 [2024-12-15 18:36:02.589544] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.295 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.554 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.554 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.554 { 00:05:02.554 "subsystems": [ 00:05:02.554 { 00:05:02.554 "subsystem": "fsdev", 00:05:02.554 "config": [ 00:05:02.554 { 00:05:02.554 "method": "fsdev_set_opts", 00:05:02.554 "params": { 00:05:02.554 "fsdev_io_pool_size": 65535, 00:05:02.554 "fsdev_io_cache_size": 256 00:05:02.554 } 00:05:02.554 } 00:05:02.554 ] 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "subsystem": "keyring", 00:05:02.554 "config": [] 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "subsystem": "iobuf", 00:05:02.554 "config": [ 00:05:02.554 { 00:05:02.554 "method": "iobuf_set_options", 00:05:02.554 "params": { 00:05:02.554 "small_pool_count": 8192, 00:05:02.554 "large_pool_count": 1024, 00:05:02.554 "small_bufsize": 8192, 00:05:02.554 "large_bufsize": 135168, 00:05:02.554 "enable_numa": false 00:05:02.554 } 00:05:02.554 } 00:05:02.554 ] 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "subsystem": "sock", 00:05:02.554 "config": [ 00:05:02.554 { 00:05:02.554 "method": "sock_set_default_impl", 00:05:02.554 "params": { 00:05:02.554 "impl_name": "posix" 00:05:02.554 } 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "method": "sock_impl_set_options", 00:05:02.554 "params": { 00:05:02.554 "impl_name": "ssl", 00:05:02.554 "recv_buf_size": 4096, 00:05:02.554 "send_buf_size": 4096, 00:05:02.554 "enable_recv_pipe": true, 00:05:02.554 "enable_quickack": false, 00:05:02.554 
"enable_placement_id": 0, 00:05:02.554 "enable_zerocopy_send_server": true, 00:05:02.554 "enable_zerocopy_send_client": false, 00:05:02.554 "zerocopy_threshold": 0, 00:05:02.554 "tls_version": 0, 00:05:02.554 "enable_ktls": false 00:05:02.554 } 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "method": "sock_impl_set_options", 00:05:02.554 "params": { 00:05:02.554 "impl_name": "posix", 00:05:02.554 "recv_buf_size": 2097152, 00:05:02.554 "send_buf_size": 2097152, 00:05:02.554 "enable_recv_pipe": true, 00:05:02.554 "enable_quickack": false, 00:05:02.554 "enable_placement_id": 0, 00:05:02.554 "enable_zerocopy_send_server": true, 00:05:02.554 "enable_zerocopy_send_client": false, 00:05:02.554 "zerocopy_threshold": 0, 00:05:02.554 "tls_version": 0, 00:05:02.554 "enable_ktls": false 00:05:02.554 } 00:05:02.554 } 00:05:02.554 ] 00:05:02.554 }, 00:05:02.554 { 00:05:02.554 "subsystem": "vmd", 00:05:02.554 "config": [] 00:05:02.554 }, 00:05:02.555 { 00:05:02.555 "subsystem": "accel", 00:05:02.555 "config": [ 00:05:02.555 { 00:05:02.555 "method": "accel_set_options", 00:05:02.555 "params": { 00:05:02.555 "small_cache_size": 128, 00:05:02.555 "large_cache_size": 16, 00:05:02.555 "task_count": 2048, 00:05:02.555 "sequence_count": 2048, 00:05:02.555 "buf_count": 2048 00:05:02.555 } 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "bdev", 00:05:02.555 "config": [ 00:05:02.555 { 00:05:02.555 "method": "bdev_set_options", 00:05:02.555 "params": { 00:05:02.555 "bdev_io_pool_size": 65535, 00:05:02.555 "bdev_io_cache_size": 256, 00:05:02.555 "bdev_auto_examine": true, 00:05:02.555 "iobuf_small_cache_size": 128, 00:05:02.555 "iobuf_large_cache_size": 16 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "bdev_raid_set_options", 00:05:02.555 "params": { 00:05:02.555 "process_window_size_kb": 1024, 00:05:02.555 "process_max_bandwidth_mb_sec": 0 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "bdev_iscsi_set_options", 
00:05:02.555 "params": { 00:05:02.555 "timeout_sec": 30 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "bdev_nvme_set_options", 00:05:02.555 "params": { 00:05:02.555 "action_on_timeout": "none", 00:05:02.555 "timeout_us": 0, 00:05:02.555 "timeout_admin_us": 0, 00:05:02.555 "keep_alive_timeout_ms": 10000, 00:05:02.555 "arbitration_burst": 0, 00:05:02.555 "low_priority_weight": 0, 00:05:02.555 "medium_priority_weight": 0, 00:05:02.555 "high_priority_weight": 0, 00:05:02.555 "nvme_adminq_poll_period_us": 10000, 00:05:02.555 "nvme_ioq_poll_period_us": 0, 00:05:02.555 "io_queue_requests": 0, 00:05:02.555 "delay_cmd_submit": true, 00:05:02.555 "transport_retry_count": 4, 00:05:02.555 "bdev_retry_count": 3, 00:05:02.555 "transport_ack_timeout": 0, 00:05:02.555 "ctrlr_loss_timeout_sec": 0, 00:05:02.555 "reconnect_delay_sec": 0, 00:05:02.555 "fast_io_fail_timeout_sec": 0, 00:05:02.555 "disable_auto_failback": false, 00:05:02.555 "generate_uuids": false, 00:05:02.555 "transport_tos": 0, 00:05:02.555 "nvme_error_stat": false, 00:05:02.555 "rdma_srq_size": 0, 00:05:02.555 "io_path_stat": false, 00:05:02.555 "allow_accel_sequence": false, 00:05:02.555 "rdma_max_cq_size": 0, 00:05:02.555 "rdma_cm_event_timeout_ms": 0, 00:05:02.555 "dhchap_digests": [ 00:05:02.555 "sha256", 00:05:02.555 "sha384", 00:05:02.555 "sha512" 00:05:02.555 ], 00:05:02.555 "dhchap_dhgroups": [ 00:05:02.555 "null", 00:05:02.555 "ffdhe2048", 00:05:02.555 "ffdhe3072", 00:05:02.555 "ffdhe4096", 00:05:02.555 "ffdhe6144", 00:05:02.555 "ffdhe8192" 00:05:02.555 ], 00:05:02.555 "rdma_umr_per_io": false 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "bdev_nvme_set_hotplug", 00:05:02.555 "params": { 00:05:02.555 "period_us": 100000, 00:05:02.555 "enable": false 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "bdev_wait_for_examine" 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "scsi", 00:05:02.555 "config": null 
00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "scheduler", 00:05:02.555 "config": [ 00:05:02.555 { 00:05:02.555 "method": "framework_set_scheduler", 00:05:02.555 "params": { 00:05:02.555 "name": "static" 00:05:02.555 } 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "vhost_scsi", 00:05:02.555 "config": [] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "vhost_blk", 00:05:02.555 "config": [] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "ublk", 00:05:02.555 "config": [] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "nbd", 00:05:02.555 "config": [] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "nvmf", 00:05:02.555 "config": [ 00:05:02.555 { 00:05:02.555 "method": "nvmf_set_config", 00:05:02.555 "params": { 00:05:02.555 "discovery_filter": "match_any", 00:05:02.555 "admin_cmd_passthru": { 00:05:02.555 "identify_ctrlr": false 00:05:02.555 }, 00:05:02.555 "dhchap_digests": [ 00:05:02.555 "sha256", 00:05:02.555 "sha384", 00:05:02.555 "sha512" 00:05:02.555 ], 00:05:02.555 "dhchap_dhgroups": [ 00:05:02.555 "null", 00:05:02.555 "ffdhe2048", 00:05:02.555 "ffdhe3072", 00:05:02.555 "ffdhe4096", 00:05:02.555 "ffdhe6144", 00:05:02.555 "ffdhe8192" 00:05:02.555 ] 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "nvmf_set_max_subsystems", 00:05:02.555 "params": { 00:05:02.555 "max_subsystems": 1024 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "nvmf_set_crdt", 00:05:02.555 "params": { 00:05:02.555 "crdt1": 0, 00:05:02.555 "crdt2": 0, 00:05:02.555 "crdt3": 0 00:05:02.555 } 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "method": "nvmf_create_transport", 00:05:02.555 "params": { 00:05:02.555 "trtype": "TCP", 00:05:02.555 "max_queue_depth": 128, 00:05:02.555 "max_io_qpairs_per_ctrlr": 127, 00:05:02.555 "in_capsule_data_size": 4096, 00:05:02.555 "max_io_size": 131072, 00:05:02.555 "io_unit_size": 131072, 00:05:02.555 "max_aq_depth": 128, 00:05:02.555 
"num_shared_buffers": 511, 00:05:02.555 "buf_cache_size": 4294967295, 00:05:02.555 "dif_insert_or_strip": false, 00:05:02.555 "zcopy": false, 00:05:02.555 "c2h_success": true, 00:05:02.555 "sock_priority": 0, 00:05:02.555 "abort_timeout_sec": 1, 00:05:02.555 "ack_timeout": 0, 00:05:02.555 "data_wr_pool_size": 0 00:05:02.555 } 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 }, 00:05:02.555 { 00:05:02.555 "subsystem": "iscsi", 00:05:02.555 "config": [ 00:05:02.555 { 00:05:02.555 "method": "iscsi_set_options", 00:05:02.555 "params": { 00:05:02.555 "node_base": "iqn.2016-06.io.spdk", 00:05:02.555 "max_sessions": 128, 00:05:02.555 "max_connections_per_session": 2, 00:05:02.555 "max_queue_depth": 64, 00:05:02.555 "default_time2wait": 2, 00:05:02.555 "default_time2retain": 20, 00:05:02.555 "first_burst_length": 8192, 00:05:02.555 "immediate_data": true, 00:05:02.555 "allow_duplicated_isid": false, 00:05:02.555 "error_recovery_level": 0, 00:05:02.555 "nop_timeout": 60, 00:05:02.555 "nop_in_interval": 30, 00:05:02.555 "disable_chap": false, 00:05:02.555 "require_chap": false, 00:05:02.555 "mutual_chap": false, 00:05:02.555 "chap_group": 0, 00:05:02.555 "max_large_datain_per_connection": 64, 00:05:02.555 "max_r2t_per_connection": 4, 00:05:02.555 "pdu_pool_size": 36864, 00:05:02.555 "immediate_data_pool_size": 16384, 00:05:02.555 "data_out_pool_size": 2048 00:05:02.555 } 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 } 00:05:02.555 ] 00:05:02.555 } 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71262 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71262 ']' 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71262 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.555 18:36:02 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71262 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71262' 00:05:02.555 killing process with pid 71262 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71262 00:05:02.555 18:36:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71262 00:05:02.815 18:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71291 00:05:02.815 18:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.815 18:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71291 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71291 ']' 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71291 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.151 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71291 00:05:08.152 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.152 killing process with pid 71291 00:05:08.152 18:36:08 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.152 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71291' 00:05:08.152 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71291 00:05:08.152 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71291 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:08.412 00:05:08.412 real 0m6.967s 00:05:08.412 user 0m6.526s 00:05:08.412 sys 0m0.743s 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.412 ************************************ 00:05:08.412 END TEST skip_rpc_with_json 00:05:08.412 ************************************ 00:05:08.412 18:36:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.412 18:36:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.412 18:36:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.412 18:36:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.412 ************************************ 00:05:08.412 START TEST skip_rpc_with_delay 00:05:08.412 ************************************ 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # 
local es=0 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:08.412 [2024-12-15 18:36:08.778488] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.412 00:05:08.412 real 0m0.170s 00:05:08.412 user 0m0.080s 00:05:08.412 sys 0m0.088s 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.412 ************************************ 00:05:08.412 END TEST skip_rpc_with_delay 00:05:08.412 ************************************ 00:05:08.412 18:36:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 18:36:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:08.672 18:36:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:08.672 18:36:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:08.672 18:36:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.672 18:36:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.672 18:36:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 ************************************ 00:05:08.672 START TEST exit_on_failed_rpc_init 00:05:08.672 ************************************ 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71398 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71398 00:05:08.672 18:36:08 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71398 ']' 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.672 18:36:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.672 [2024-12-15 18:36:09.022386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:08.672 [2024-12-15 18:36:09.022521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71398 ] 00:05:08.931 [2024-12-15 18:36:09.176543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.931 [2024-12-15 18:36:09.207061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.500 18:36:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:09.500 18:36:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.760 [2024-12-15 18:36:09.975554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:09.760 [2024-12-15 18:36:09.975776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71415 ] 00:05:09.760 [2024-12-15 18:36:10.151280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.760 [2024-12-15 18:36:10.181116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.760 [2024-12-15 18:36:10.181301] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:09.760 [2024-12-15 18:36:10.181358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.760 [2024-12-15 18:36:10.181424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71398 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71398 ']' 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71398 00:05:10.020 18:36:10 
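The xtrace above shows the `NOT` helper from autotest_common.sh inverting an expected failure: the wrapped `spdk_tgt` exits with 234, which is remapped (`es=234` → `es=106` → `es=1`) before the final `(( !es == 0 ))` check. A minimal sketch of that pattern, assuming the names seen in the trace (`NOT`, `es`); this is an illustrative re-implementation, not the actual autotest_common.sh source:

```shell
# Hedged sketch of the NOT helper whose xtrace appears above (es=234 -> 106 -> 1).
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 usually indicate death-by-signal; the trace remaps
    # them to an internal code first, then normalizes to a plain 1.
    (( es > 128 )) && es=106
    case "$es" in
        106) es=1 ;;
    esac
    # Succeed only when the wrapped command failed -- the same
    # `(( !es == 0 ))` arithmetic test shown in the trace.
    (( !es == 0 ))
}
```

In the trace, this is how a deliberately invalid `spdk_tgt --no-rpc-server --wait-for-rpc` invocation can abort while the test itself still passes.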
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71398 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.020 killing process with pid 71398 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71398' 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71398 00:05:10.020 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71398 00:05:10.280 ************************************ 00:05:10.281 END TEST exit_on_failed_rpc_init 00:05:10.281 ************************************ 00:05:10.281 00:05:10.281 real 0m1.783s 00:05:10.281 user 0m1.955s 00:05:10.281 sys 0m0.502s 00:05:10.281 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.281 18:36:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.541 18:36:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.541 00:05:10.541 real 0m14.863s 00:05:10.541 user 0m13.807s 00:05:10.541 sys 0m1.968s 00:05:10.541 ************************************ 00:05:10.541 END TEST skip_rpc 00:05:10.541 ************************************ 00:05:10.541 18:36:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.541 18:36:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.541 18:36:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:10.541 18:36:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.541 18:36:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.541 18:36:10 -- common/autotest_common.sh@10 -- # set +x 00:05:10.541 ************************************ 00:05:10.541 START TEST rpc_client 00:05:10.541 ************************************ 00:05:10.541 18:36:10 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:10.541 * Looking for test storage... 00:05:10.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:10.541 18:36:10 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.541 18:36:10 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.541 18:36:10 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.802 18:36:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.802 --rc genhtml_branch_coverage=1 00:05:10.802 --rc genhtml_function_coverage=1 00:05:10.802 --rc genhtml_legend=1 00:05:10.802 --rc geninfo_all_blocks=1 00:05:10.802 --rc geninfo_unexecuted_blocks=1 00:05:10.802 00:05:10.802 ' 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.802 --rc genhtml_branch_coverage=1 00:05:10.802 --rc genhtml_function_coverage=1 00:05:10.802 --rc 
genhtml_legend=1 00:05:10.802 --rc geninfo_all_blocks=1 00:05:10.802 --rc geninfo_unexecuted_blocks=1 00:05:10.802 00:05:10.802 ' 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.802 --rc genhtml_branch_coverage=1 00:05:10.802 --rc genhtml_function_coverage=1 00:05:10.802 --rc genhtml_legend=1 00:05:10.802 --rc geninfo_all_blocks=1 00:05:10.802 --rc geninfo_unexecuted_blocks=1 00:05:10.802 00:05:10.802 ' 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.802 --rc genhtml_branch_coverage=1 00:05:10.802 --rc genhtml_function_coverage=1 00:05:10.802 --rc genhtml_legend=1 00:05:10.802 --rc geninfo_all_blocks=1 00:05:10.802 --rc geninfo_unexecuted_blocks=1 00:05:10.802 00:05:10.802 ' 00:05:10.802 18:36:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:10.802 OK 00:05:10.802 18:36:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.802 00:05:10.802 real 0m0.279s 00:05:10.802 user 0m0.170s 00:05:10.802 sys 0m0.125s 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.802 18:36:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:10.802 ************************************ 00:05:10.802 END TEST rpc_client 00:05:10.802 ************************************ 00:05:10.802 18:36:11 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.802 18:36:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.802 18:36:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.802 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:10.802 ************************************ 00:05:10.802 START TEST json_config 
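The `lt 1.15 2` call traced above expands to `cmp_versions 1.15 '<' 2` in scripts/common.sh: each version string is split on `.-:` into an array, and components are compared numerically left to right. A minimal sketch under the same structure seen in the trace (`IFS=.-:`, `read -ra ver1`, the `(( ver1[v] < ver2[v] ))` loop); an illustrative re-implementation, not the actual SPDK source:

```shell
# Hedged sketch of cmp_versions/lt as walked through in the xtrace above.
cmp_versions() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing or non-numeric components count as 0, mirroring the
        # `decimal` helper's [[ d =~ ^[0-9]+$ ]] check in the trace.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    # All compared components equal: only non-strict operators hold.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
```

This is what lets the test harness decide whether the installed `lcov` (1.15 here) predates version 2 and pick the matching coverage flags.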
00:05:10.802 ************************************ 00:05:10.802 18:36:11 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:10.802 18:36:11 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.802 18:36:11 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.802 18:36:11 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.063 18:36:11 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.063 18:36:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.063 18:36:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.063 18:36:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.063 18:36:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.063 18:36:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.063 18:36:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:11.063 18:36:11 json_config -- scripts/common.sh@345 -- # : 1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.063 18:36:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.063 18:36:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@353 -- # local d=1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.063 18:36:11 json_config -- scripts/common.sh@355 -- # echo 1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.063 18:36:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@353 -- # local d=2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.063 18:36:11 json_config -- scripts/common.sh@355 -- # echo 2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.063 18:36:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.063 18:36:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.063 18:36:11 json_config -- scripts/common.sh@368 -- # return 0 00:05:11.063 18:36:11 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.063 18:36:11 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.063 --rc genhtml_branch_coverage=1 00:05:11.063 --rc genhtml_function_coverage=1 00:05:11.063 --rc genhtml_legend=1 00:05:11.063 --rc geninfo_all_blocks=1 00:05:11.063 --rc geninfo_unexecuted_blocks=1 00:05:11.063 00:05:11.063 ' 00:05:11.064 18:36:11 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.064 --rc genhtml_branch_coverage=1 00:05:11.064 --rc genhtml_function_coverage=1 00:05:11.064 --rc genhtml_legend=1 00:05:11.064 --rc geninfo_all_blocks=1 00:05:11.064 --rc geninfo_unexecuted_blocks=1 00:05:11.064 00:05:11.064 ' 00:05:11.064 18:36:11 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.064 --rc genhtml_branch_coverage=1 00:05:11.064 --rc genhtml_function_coverage=1 00:05:11.064 --rc genhtml_legend=1 00:05:11.064 --rc geninfo_all_blocks=1 00:05:11.064 --rc geninfo_unexecuted_blocks=1 00:05:11.064 00:05:11.064 ' 00:05:11.064 18:36:11 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.064 --rc genhtml_branch_coverage=1 00:05:11.064 --rc genhtml_function_coverage=1 00:05:11.064 --rc genhtml_legend=1 00:05:11.064 --rc geninfo_all_blocks=1 00:05:11.064 --rc geninfo_unexecuted_blocks=1 00:05:11.064 00:05:11.064 ' 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6060331-514e-448e-9fbd-57198c1fa4b2 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f6060331-514e-448e-9fbd-57198c1fa4b2 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.064 18:36:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.064 18:36:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.064 18:36:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.064 18:36:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.064 18:36:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.064 18:36:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.064 18:36:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.064 18:36:11 json_config -- paths/export.sh@5 -- # export PATH 00:05:11.064 18:36:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@51 -- # : 0 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.064 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.064 18:36:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
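The non-fatal error above (`nvmf/common.sh: line 33: [: : integer expression expected`) comes from `'[' '' -eq 1 ']'`: when the tested variable expands to an empty string, `-eq` has no integer to compare. A defensive sketch of guarding such a test; the variable name is hypothetical and this is not the actual nvmf/common.sh fix:

```shell
# Hedged sketch: default an empty/unset variable to 0 before an integer test,
# avoiding the "[: : integer expression expected" error seen in the log.
want_tcp=""                          # hypothetical flag; may be empty in CI
if [ "${want_tcp:-0}" -eq 1 ]; then  # ${var:-0} substitutes 0 for empty/unset
    echo "tcp enabled"
else
    echo "tcp disabled"
fi
```

The `:-` form (rather than `-`) also covers a set-but-empty value, which is exactly the case the trace tripped over.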
00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:11.064 WARNING: No tests are enabled so not running JSON configuration tests 00:05:11.064 18:36:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:11.064 00:05:11.064 real 0m0.207s 00:05:11.064 user 0m0.122s 00:05:11.064 sys 0m0.089s 00:05:11.064 18:36:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.064 18:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 ************************************ 00:05:11.064 END TEST json_config 00:05:11.064 ************************************ 00:05:11.064 18:36:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.064 18:36:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.064 18:36:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.064 18:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 ************************************ 00:05:11.064 START TEST json_config_extra_key 00:05:11.064 ************************************ 00:05:11.064 18:36:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.064 18:36:11 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.064 18:36:11 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:11.064 18:36:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.325 18:36:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.325 --rc genhtml_branch_coverage=1 00:05:11.325 --rc genhtml_function_coverage=1 00:05:11.325 --rc genhtml_legend=1 00:05:11.325 --rc geninfo_all_blocks=1 00:05:11.325 --rc geninfo_unexecuted_blocks=1 00:05:11.325 00:05:11.325 ' 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.325 --rc genhtml_branch_coverage=1 00:05:11.325 --rc genhtml_function_coverage=1 00:05:11.325 --rc 
genhtml_legend=1 00:05:11.325 --rc geninfo_all_blocks=1 00:05:11.325 --rc geninfo_unexecuted_blocks=1 00:05:11.325 00:05:11.325 ' 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.325 --rc genhtml_branch_coverage=1 00:05:11.325 --rc genhtml_function_coverage=1 00:05:11.325 --rc genhtml_legend=1 00:05:11.325 --rc geninfo_all_blocks=1 00:05:11.325 --rc geninfo_unexecuted_blocks=1 00:05:11.325 00:05:11.325 ' 00:05:11.325 18:36:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.325 --rc genhtml_branch_coverage=1 00:05:11.325 --rc genhtml_function_coverage=1 00:05:11.325 --rc genhtml_legend=1 00:05:11.325 --rc geninfo_all_blocks=1 00:05:11.325 --rc geninfo_unexecuted_blocks=1 00:05:11.325 00:05:11.325 ' 00:05:11.325 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6060331-514e-448e-9fbd-57198c1fa4b2 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f6060331-514e-448e-9fbd-57198c1fa4b2 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.325 18:36:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.326 18:36:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.326 18:36:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.326 18:36:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.326 18:36:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.326 18:36:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.326 18:36:11 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.326 18:36:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.326 18:36:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:11.326 18:36:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.326 18:36:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:11.326 INFO: launching applications... 
00:05:11.326 18:36:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71603 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.326 Waiting for target to run... 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71603 /var/tmp/spdk_tgt.sock 00:05:11.326 18:36:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71603 ']' 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:11.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.326 18:36:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.326 [2024-12-15 18:36:11.739021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:11.326 [2024-12-15 18:36:11.739267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71603 ] 00:05:11.895 [2024-12-15 18:36:12.126864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.895 [2024-12-15 18:36:12.146310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.155 18:36:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.155 18:36:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:12.155 18:36:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:12.155 00:05:12.155 18:36:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:12.155 INFO: shutting down applications... 
00:05:12.415 18:36:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71603 ]] 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71603 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71603 00:05:12.415 18:36:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71603 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.675 SPDK target shutdown done 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.675 18:36:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.675 18:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.675 Success 00:05:12.675 00:05:12.675 real 0m1.691s 00:05:12.675 user 0m1.413s 00:05:12.675 sys 0m0.505s 00:05:12.675 18:36:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.675 18:36:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.675 ************************************ 
00:05:12.675 END TEST json_config_extra_key 00:05:12.675 ************************************ 00:05:12.935 18:36:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.935 18:36:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.935 18:36:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.935 18:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:12.935 ************************************ 00:05:12.935 START TEST alias_rpc 00:05:12.935 ************************************ 00:05:12.935 18:36:13 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.935 * Looking for test storage... 00:05:12.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.935 18:36:13 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.935 18:36:13 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.935 18:36:13 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:12.935 18:36:13 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.935 18:36:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.935 18:36:13 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.195 18:36:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.195 --rc genhtml_branch_coverage=1 00:05:13.195 --rc genhtml_function_coverage=1 00:05:13.195 --rc genhtml_legend=1 00:05:13.195 --rc geninfo_all_blocks=1 00:05:13.195 --rc geninfo_unexecuted_blocks=1 00:05:13.195 00:05:13.195 ' 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.195 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.195 --rc genhtml_branch_coverage=1 00:05:13.195 --rc genhtml_function_coverage=1 00:05:13.195 --rc genhtml_legend=1 00:05:13.195 --rc geninfo_all_blocks=1 00:05:13.195 --rc geninfo_unexecuted_blocks=1 00:05:13.195 00:05:13.195 ' 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.195 --rc genhtml_branch_coverage=1 00:05:13.195 --rc genhtml_function_coverage=1 00:05:13.195 --rc genhtml_legend=1 00:05:13.195 --rc geninfo_all_blocks=1 00:05:13.195 --rc geninfo_unexecuted_blocks=1 00:05:13.195 00:05:13.195 ' 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.195 --rc genhtml_branch_coverage=1 00:05:13.195 --rc genhtml_function_coverage=1 00:05:13.195 --rc genhtml_legend=1 00:05:13.195 --rc geninfo_all_blocks=1 00:05:13.195 --rc geninfo_unexecuted_blocks=1 00:05:13.195 00:05:13.195 ' 00:05:13.195 18:36:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.195 18:36:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71682 00:05:13.195 18:36:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.195 18:36:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71682 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71682 ']' 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:13.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.195 18:36:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.195 [2024-12-15 18:36:13.492979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:13.195 [2024-12-15 18:36:13.493259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71682 ] 00:05:13.455 [2024-12-15 18:36:13.649245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.455 [2024-12-15 18:36:13.679444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.023 18:36:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.023 18:36:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.023 18:36:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:14.281 18:36:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71682 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71682 ']' 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71682 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71682 00:05:14.281 killing process with pid 71682 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.281 18:36:14 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71682' 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 71682 00:05:14.281 18:36:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 71682 00:05:14.850 ************************************ 00:05:14.850 END TEST alias_rpc 00:05:14.850 ************************************ 00:05:14.850 00:05:14.850 real 0m1.843s 00:05:14.850 user 0m1.914s 00:05:14.850 sys 0m0.540s 00:05:14.850 18:36:15 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.850 18:36:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.850 18:36:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:14.850 18:36:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:14.850 18:36:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.850 18:36:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.850 18:36:15 -- common/autotest_common.sh@10 -- # set +x 00:05:14.850 ************************************ 00:05:14.850 START TEST spdkcli_tcp 00:05:14.850 ************************************ 00:05:14.850 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:14.850 * Looking for test storage... 
00:05:14.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:14.850 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.850 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.850 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.850 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:14.850 18:36:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.110 18:36:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.110 --rc genhtml_branch_coverage=1 00:05:15.110 --rc genhtml_function_coverage=1 00:05:15.110 --rc genhtml_legend=1 00:05:15.110 --rc geninfo_all_blocks=1 00:05:15.110 --rc geninfo_unexecuted_blocks=1 00:05:15.110 00:05:15.110 ' 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.110 --rc genhtml_branch_coverage=1 00:05:15.110 --rc genhtml_function_coverage=1 00:05:15.110 --rc genhtml_legend=1 00:05:15.110 --rc geninfo_all_blocks=1 00:05:15.110 --rc geninfo_unexecuted_blocks=1 00:05:15.110 00:05:15.110 ' 00:05:15.110 18:36:15 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.110 --rc genhtml_branch_coverage=1 00:05:15.110 --rc genhtml_function_coverage=1 00:05:15.110 --rc genhtml_legend=1 00:05:15.110 --rc geninfo_all_blocks=1 00:05:15.110 --rc geninfo_unexecuted_blocks=1 00:05:15.110 00:05:15.110 ' 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.110 --rc genhtml_branch_coverage=1 00:05:15.110 --rc genhtml_function_coverage=1 00:05:15.110 --rc genhtml_legend=1 00:05:15.110 --rc geninfo_all_blocks=1 00:05:15.110 --rc geninfo_unexecuted_blocks=1 00:05:15.110 00:05:15.110 ' 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71767 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.110 18:36:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71767 00:05:15.110 18:36:15 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 71767 ']' 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.110 18:36:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.110 [2024-12-15 18:36:15.406074] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:15.110 [2024-12-15 18:36:15.406297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:05:15.369 [2024-12-15 18:36:15.579763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.369 [2024-12-15 18:36:15.609276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.369 [2024-12-15 18:36:15.609366] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.935 18:36:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.936 18:36:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:15.936 18:36:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:15.936 18:36:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71773 00:05:15.936 18:36:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.194 [ 00:05:16.194 "bdev_malloc_delete", 
00:05:16.194 "bdev_malloc_create", 00:05:16.194 "bdev_null_resize", 00:05:16.194 "bdev_null_delete", 00:05:16.194 "bdev_null_create", 00:05:16.194 "bdev_nvme_cuse_unregister", 00:05:16.194 "bdev_nvme_cuse_register", 00:05:16.194 "bdev_opal_new_user", 00:05:16.194 "bdev_opal_set_lock_state", 00:05:16.194 "bdev_opal_delete", 00:05:16.194 "bdev_opal_get_info", 00:05:16.194 "bdev_opal_create", 00:05:16.194 "bdev_nvme_opal_revert", 00:05:16.194 "bdev_nvme_opal_init", 00:05:16.194 "bdev_nvme_send_cmd", 00:05:16.194 "bdev_nvme_set_keys", 00:05:16.194 "bdev_nvme_get_path_iostat", 00:05:16.194 "bdev_nvme_get_mdns_discovery_info", 00:05:16.194 "bdev_nvme_stop_mdns_discovery", 00:05:16.194 "bdev_nvme_start_mdns_discovery", 00:05:16.194 "bdev_nvme_set_multipath_policy", 00:05:16.194 "bdev_nvme_set_preferred_path", 00:05:16.194 "bdev_nvme_get_io_paths", 00:05:16.194 "bdev_nvme_remove_error_injection", 00:05:16.194 "bdev_nvme_add_error_injection", 00:05:16.194 "bdev_nvme_get_discovery_info", 00:05:16.194 "bdev_nvme_stop_discovery", 00:05:16.194 "bdev_nvme_start_discovery", 00:05:16.194 "bdev_nvme_get_controller_health_info", 00:05:16.194 "bdev_nvme_disable_controller", 00:05:16.194 "bdev_nvme_enable_controller", 00:05:16.194 "bdev_nvme_reset_controller", 00:05:16.194 "bdev_nvme_get_transport_statistics", 00:05:16.194 "bdev_nvme_apply_firmware", 00:05:16.194 "bdev_nvme_detach_controller", 00:05:16.194 "bdev_nvme_get_controllers", 00:05:16.194 "bdev_nvme_attach_controller", 00:05:16.194 "bdev_nvme_set_hotplug", 00:05:16.194 "bdev_nvme_set_options", 00:05:16.194 "bdev_passthru_delete", 00:05:16.194 "bdev_passthru_create", 00:05:16.194 "bdev_lvol_set_parent_bdev", 00:05:16.194 "bdev_lvol_set_parent", 00:05:16.194 "bdev_lvol_check_shallow_copy", 00:05:16.194 "bdev_lvol_start_shallow_copy", 00:05:16.194 "bdev_lvol_grow_lvstore", 00:05:16.194 "bdev_lvol_get_lvols", 00:05:16.194 "bdev_lvol_get_lvstores", 00:05:16.194 "bdev_lvol_delete", 00:05:16.194 "bdev_lvol_set_read_only", 
00:05:16.194 "bdev_lvol_resize", 00:05:16.194 "bdev_lvol_decouple_parent", 00:05:16.194 "bdev_lvol_inflate", 00:05:16.194 "bdev_lvol_rename", 00:05:16.194 "bdev_lvol_clone_bdev", 00:05:16.194 "bdev_lvol_clone", 00:05:16.194 "bdev_lvol_snapshot", 00:05:16.194 "bdev_lvol_create", 00:05:16.194 "bdev_lvol_delete_lvstore", 00:05:16.194 "bdev_lvol_rename_lvstore", 00:05:16.194 "bdev_lvol_create_lvstore", 00:05:16.194 "bdev_raid_set_options", 00:05:16.194 "bdev_raid_remove_base_bdev", 00:05:16.194 "bdev_raid_add_base_bdev", 00:05:16.194 "bdev_raid_delete", 00:05:16.194 "bdev_raid_create", 00:05:16.194 "bdev_raid_get_bdevs", 00:05:16.194 "bdev_error_inject_error", 00:05:16.194 "bdev_error_delete", 00:05:16.194 "bdev_error_create", 00:05:16.194 "bdev_split_delete", 00:05:16.194 "bdev_split_create", 00:05:16.194 "bdev_delay_delete", 00:05:16.194 "bdev_delay_create", 00:05:16.194 "bdev_delay_update_latency", 00:05:16.194 "bdev_zone_block_delete", 00:05:16.194 "bdev_zone_block_create", 00:05:16.194 "blobfs_create", 00:05:16.194 "blobfs_detect", 00:05:16.194 "blobfs_set_cache_size", 00:05:16.194 "bdev_aio_delete", 00:05:16.194 "bdev_aio_rescan", 00:05:16.194 "bdev_aio_create", 00:05:16.194 "bdev_ftl_set_property", 00:05:16.194 "bdev_ftl_get_properties", 00:05:16.194 "bdev_ftl_get_stats", 00:05:16.194 "bdev_ftl_unmap", 00:05:16.194 "bdev_ftl_unload", 00:05:16.194 "bdev_ftl_delete", 00:05:16.194 "bdev_ftl_load", 00:05:16.194 "bdev_ftl_create", 00:05:16.194 "bdev_virtio_attach_controller", 00:05:16.194 "bdev_virtio_scsi_get_devices", 00:05:16.194 "bdev_virtio_detach_controller", 00:05:16.194 "bdev_virtio_blk_set_hotplug", 00:05:16.194 "bdev_iscsi_delete", 00:05:16.194 "bdev_iscsi_create", 00:05:16.194 "bdev_iscsi_set_options", 00:05:16.194 "accel_error_inject_error", 00:05:16.194 "ioat_scan_accel_module", 00:05:16.194 "dsa_scan_accel_module", 00:05:16.194 "iaa_scan_accel_module", 00:05:16.194 "keyring_file_remove_key", 00:05:16.194 "keyring_file_add_key", 00:05:16.194 
"keyring_linux_set_options", 00:05:16.194 "fsdev_aio_delete", 00:05:16.194 "fsdev_aio_create", 00:05:16.194 "iscsi_get_histogram", 00:05:16.194 "iscsi_enable_histogram", 00:05:16.194 "iscsi_set_options", 00:05:16.194 "iscsi_get_auth_groups", 00:05:16.194 "iscsi_auth_group_remove_secret", 00:05:16.194 "iscsi_auth_group_add_secret", 00:05:16.194 "iscsi_delete_auth_group", 00:05:16.194 "iscsi_create_auth_group", 00:05:16.194 "iscsi_set_discovery_auth", 00:05:16.194 "iscsi_get_options", 00:05:16.194 "iscsi_target_node_request_logout", 00:05:16.194 "iscsi_target_node_set_redirect", 00:05:16.194 "iscsi_target_node_set_auth", 00:05:16.194 "iscsi_target_node_add_lun", 00:05:16.194 "iscsi_get_stats", 00:05:16.194 "iscsi_get_connections", 00:05:16.194 "iscsi_portal_group_set_auth", 00:05:16.194 "iscsi_start_portal_group", 00:05:16.194 "iscsi_delete_portal_group", 00:05:16.194 "iscsi_create_portal_group", 00:05:16.195 "iscsi_get_portal_groups", 00:05:16.195 "iscsi_delete_target_node", 00:05:16.195 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.195 "iscsi_target_node_add_pg_ig_maps", 00:05:16.195 "iscsi_create_target_node", 00:05:16.195 "iscsi_get_target_nodes", 00:05:16.195 "iscsi_delete_initiator_group", 00:05:16.195 "iscsi_initiator_group_remove_initiators", 00:05:16.195 "iscsi_initiator_group_add_initiators", 00:05:16.195 "iscsi_create_initiator_group", 00:05:16.195 "iscsi_get_initiator_groups", 00:05:16.195 "nvmf_set_crdt", 00:05:16.195 "nvmf_set_config", 00:05:16.195 "nvmf_set_max_subsystems", 00:05:16.195 "nvmf_stop_mdns_prr", 00:05:16.195 "nvmf_publish_mdns_prr", 00:05:16.195 "nvmf_subsystem_get_listeners", 00:05:16.195 "nvmf_subsystem_get_qpairs", 00:05:16.195 "nvmf_subsystem_get_controllers", 00:05:16.195 "nvmf_get_stats", 00:05:16.195 "nvmf_get_transports", 00:05:16.195 "nvmf_create_transport", 00:05:16.195 "nvmf_get_targets", 00:05:16.195 "nvmf_delete_target", 00:05:16.195 "nvmf_create_target", 00:05:16.195 "nvmf_subsystem_allow_any_host", 00:05:16.195 
"nvmf_subsystem_set_keys", 00:05:16.195 "nvmf_subsystem_remove_host", 00:05:16.195 "nvmf_subsystem_add_host", 00:05:16.195 "nvmf_ns_remove_host", 00:05:16.195 "nvmf_ns_add_host", 00:05:16.195 "nvmf_subsystem_remove_ns", 00:05:16.195 "nvmf_subsystem_set_ns_ana_group", 00:05:16.195 "nvmf_subsystem_add_ns", 00:05:16.195 "nvmf_subsystem_listener_set_ana_state", 00:05:16.195 "nvmf_discovery_get_referrals", 00:05:16.195 "nvmf_discovery_remove_referral", 00:05:16.195 "nvmf_discovery_add_referral", 00:05:16.195 "nvmf_subsystem_remove_listener", 00:05:16.195 "nvmf_subsystem_add_listener", 00:05:16.195 "nvmf_delete_subsystem", 00:05:16.195 "nvmf_create_subsystem", 00:05:16.195 "nvmf_get_subsystems", 00:05:16.195 "env_dpdk_get_mem_stats", 00:05:16.195 "nbd_get_disks", 00:05:16.195 "nbd_stop_disk", 00:05:16.195 "nbd_start_disk", 00:05:16.195 "ublk_recover_disk", 00:05:16.195 "ublk_get_disks", 00:05:16.195 "ublk_stop_disk", 00:05:16.195 "ublk_start_disk", 00:05:16.195 "ublk_destroy_target", 00:05:16.195 "ublk_create_target", 00:05:16.195 "virtio_blk_create_transport", 00:05:16.195 "virtio_blk_get_transports", 00:05:16.195 "vhost_controller_set_coalescing", 00:05:16.195 "vhost_get_controllers", 00:05:16.195 "vhost_delete_controller", 00:05:16.195 "vhost_create_blk_controller", 00:05:16.195 "vhost_scsi_controller_remove_target", 00:05:16.195 "vhost_scsi_controller_add_target", 00:05:16.195 "vhost_start_scsi_controller", 00:05:16.195 "vhost_create_scsi_controller", 00:05:16.195 "thread_set_cpumask", 00:05:16.195 "scheduler_set_options", 00:05:16.195 "framework_get_governor", 00:05:16.195 "framework_get_scheduler", 00:05:16.195 "framework_set_scheduler", 00:05:16.195 "framework_get_reactors", 00:05:16.195 "thread_get_io_channels", 00:05:16.195 "thread_get_pollers", 00:05:16.195 "thread_get_stats", 00:05:16.195 "framework_monitor_context_switch", 00:05:16.195 "spdk_kill_instance", 00:05:16.195 "log_enable_timestamps", 00:05:16.195 "log_get_flags", 00:05:16.195 "log_clear_flag", 
00:05:16.195 "log_set_flag", 00:05:16.195 "log_get_level", 00:05:16.195 "log_set_level", 00:05:16.195 "log_get_print_level", 00:05:16.195 "log_set_print_level", 00:05:16.195 "framework_enable_cpumask_locks", 00:05:16.195 "framework_disable_cpumask_locks", 00:05:16.195 "framework_wait_init", 00:05:16.195 "framework_start_init", 00:05:16.195 "scsi_get_devices", 00:05:16.195 "bdev_get_histogram", 00:05:16.195 "bdev_enable_histogram", 00:05:16.195 "bdev_set_qos_limit", 00:05:16.195 "bdev_set_qd_sampling_period", 00:05:16.195 "bdev_get_bdevs", 00:05:16.195 "bdev_reset_iostat", 00:05:16.195 "bdev_get_iostat", 00:05:16.195 "bdev_examine", 00:05:16.195 "bdev_wait_for_examine", 00:05:16.195 "bdev_set_options", 00:05:16.195 "accel_get_stats", 00:05:16.195 "accel_set_options", 00:05:16.195 "accel_set_driver", 00:05:16.195 "accel_crypto_key_destroy", 00:05:16.195 "accel_crypto_keys_get", 00:05:16.195 "accel_crypto_key_create", 00:05:16.195 "accel_assign_opc", 00:05:16.195 "accel_get_module_info", 00:05:16.195 "accel_get_opc_assignments", 00:05:16.195 "vmd_rescan", 00:05:16.195 "vmd_remove_device", 00:05:16.195 "vmd_enable", 00:05:16.195 "sock_get_default_impl", 00:05:16.195 "sock_set_default_impl", 00:05:16.195 "sock_impl_set_options", 00:05:16.195 "sock_impl_get_options", 00:05:16.195 "iobuf_get_stats", 00:05:16.195 "iobuf_set_options", 00:05:16.195 "keyring_get_keys", 00:05:16.195 "framework_get_pci_devices", 00:05:16.195 "framework_get_config", 00:05:16.195 "framework_get_subsystems", 00:05:16.195 "fsdev_set_opts", 00:05:16.195 "fsdev_get_opts", 00:05:16.195 "trace_get_info", 00:05:16.195 "trace_get_tpoint_group_mask", 00:05:16.195 "trace_disable_tpoint_group", 00:05:16.195 "trace_enable_tpoint_group", 00:05:16.195 "trace_clear_tpoint_mask", 00:05:16.195 "trace_set_tpoint_mask", 00:05:16.195 "notify_get_notifications", 00:05:16.195 "notify_get_types", 00:05:16.195 "spdk_get_version", 00:05:16.195 "rpc_get_methods" 00:05:16.195 ] 00:05:16.195 18:36:16 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.195 18:36:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.195 18:36:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71767 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71767 ']' 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71767 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71767 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71767' 00:05:16.195 killing process with pid 71767 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71767 00:05:16.195 18:36:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71767 00:05:16.762 00:05:16.762 real 0m1.828s 00:05:16.762 user 0m3.049s 00:05:16.762 sys 0m0.579s 00:05:16.762 18:36:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.762 18:36:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.762 ************************************ 00:05:16.762 END TEST spdkcli_tcp 00:05:16.762 ************************************ 00:05:16.762 18:36:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.762 18:36:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.762 18:36:16 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.762 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:05:16.762 ************************************ 00:05:16.762 START TEST dpdk_mem_utility 00:05:16.762 ************************************ 00:05:16.762 18:36:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.762 * Looking for test storage... 00:05:16.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:16.762 
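The xtrace lines above step through the `lt 1.15 2` call into `cmp_versions` from `scripts/common.sh`: both version strings are split into arrays on `.`, `-` and `:`, then compared component-wise, with missing components treated as zero. A minimal standalone sketch of that pattern (function names follow the trace, but the body is a simplified reconstruction assuming plain decimal components, not the exact upstream script):

```shell
#!/usr/bin/env bash
# Simplified reconstruction of the version comparison traced above.
# Splits each version on '.', '-' and ':' and compares element-wise.
cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # A missing component counts as 0, so "1.15" vs "2" decides on 1 < 2.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a == b )) && continue
        case $op in
            '<') (( a < b )); return ;;
            '>') (( a > b )); return ;;
        esac
    done
    # All components equal: both '<' and '>' are false.
    return 1
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the lcov check in the trace passes: `1.15` is less than `2` on the first component, so the coverage-option branch is taken.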
18:36:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.762 18:36:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.762 --rc genhtml_branch_coverage=1 00:05:16.762 --rc genhtml_function_coverage=1 00:05:16.762 --rc genhtml_legend=1 00:05:16.762 --rc geninfo_all_blocks=1 00:05:16.762 --rc geninfo_unexecuted_blocks=1 00:05:16.762 00:05:16.762 ' 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.762 --rc 
genhtml_branch_coverage=1 00:05:16.762 --rc genhtml_function_coverage=1 00:05:16.762 --rc genhtml_legend=1 00:05:16.762 --rc geninfo_all_blocks=1 00:05:16.762 --rc geninfo_unexecuted_blocks=1 00:05:16.762 00:05:16.762 ' 00:05:16.762 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.762 --rc genhtml_branch_coverage=1 00:05:16.762 --rc genhtml_function_coverage=1 00:05:16.762 --rc genhtml_legend=1 00:05:16.763 --rc geninfo_all_blocks=1 00:05:16.763 --rc geninfo_unexecuted_blocks=1 00:05:16.763 00:05:16.763 ' 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.763 --rc genhtml_branch_coverage=1 00:05:16.763 --rc genhtml_function_coverage=1 00:05:16.763 --rc genhtml_legend=1 00:05:16.763 --rc geninfo_all_blocks=1 00:05:16.763 --rc geninfo_unexecuted_blocks=1 00:05:16.763 00:05:16.763 ' 00:05:16.763 18:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.763 18:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71856 00:05:16.763 18:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.763 18:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71856 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71856 ']' 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:16.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.763 18:36:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.022 [2024-12-15 18:36:17.283172] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:17.022 [2024-12-15 18:36:17.283402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71856 ] 00:05:17.022 [2024-12-15 18:36:17.450795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.280 [2024-12-15 18:36:17.480230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.849 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.849 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:17.849 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.849 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.849 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.849 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.849 { 00:05:17.849 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.849 } 00:05:17.849 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.849 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.849 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:17.849 1 heaps totaling size 818.000000 MiB 00:05:17.849 size: 
818.000000 MiB heap id: 0 00:05:17.849 end heaps---------- 00:05:17.849 9 mempools totaling size 603.782043 MiB 00:05:17.849 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.849 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.849 size: 100.555481 MiB name: bdev_io_71856 00:05:17.849 size: 50.003479 MiB name: msgpool_71856 00:05:17.849 size: 36.509338 MiB name: fsdev_io_71856 00:05:17.849 size: 21.763794 MiB name: PDU_Pool 00:05:17.849 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.849 size: 4.133484 MiB name: evtpool_71856 00:05:17.849 size: 0.026123 MiB name: Session_Pool 00:05:17.849 end mempools------- 00:05:17.849 6 memzones totaling size 4.142822 MiB 00:05:17.849 size: 1.000366 MiB name: RG_ring_0_71856 00:05:17.849 size: 1.000366 MiB name: RG_ring_1_71856 00:05:17.849 size: 1.000366 MiB name: RG_ring_4_71856 00:05:17.849 size: 1.000366 MiB name: RG_ring_5_71856 00:05:17.849 size: 0.125366 MiB name: RG_ring_2_71856 00:05:17.849 size: 0.015991 MiB name: RG_ring_3_71856 00:05:17.849 end memzones------- 00:05:17.849 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.849 heap id: 0 total size: 818.000000 MiB number of busy elements: 310 number of free elements: 15 00:05:17.849 list of free elements. 
size: 10.803772 MiB 00:05:17.849 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:17.849 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:17.849 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:17.849 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:17.849 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:17.849 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:17.849 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:17.849 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:17.849 element at address: 0x20001ae00000 with size: 0.568970 MiB 00:05:17.849 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:17.849 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:17.849 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:17.849 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:17.849 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:17.849 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:17.849 list of standard malloc elements. 
size: 199.267334 MiB 00:05:17.849 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:17.849 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:17.849 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:17.849 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:17.849 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:17.849 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.849 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:17.849 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.849 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:17.849 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:17.849 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:17.849 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:17.849 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:17.849 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:17.849 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:17.849 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d3c0 with 
size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:17.850 element at address: 
0x200000c7e8c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:17.850 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:17.850 
element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:17.850 element at address: 0x20001ae92680 with size: 0.000183 
MiB
00:05:17.850 element at address: 0x20001ae92740 with size: 0.000183 MiB
00:05:17.850 [... repeated "element at address: <addr> with size: 0.000183 MiB" entries from 0x20001ae92800 through 0x20001ae95440 elided ...]
00:05:17.850 element at address: 0x200028265500 with size: 0.000183 MiB
00:05:17.850 element at address: 0x2000282655c0 with size: 0.000183 MiB
00:05:17.851 [... repeated "element at address: <addr> with size: 0.000183 MiB" entries from 0x20002826c1c0 through 0x20002826ff00 elided ...]
00:05:17.851 list of memzone associated elements. size: 607.928894 MiB
00:05:17.851 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:17.851 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:17.851 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:17.851 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:17.851 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:17.851 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_71856_0
00:05:17.851 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:17.851 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71856_0
00:05:17.851 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:17.851 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71856_0
00:05:17.851 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:17.851 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:17.851 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:17.851 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:17.851 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:17.851 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71856_0
00:05:17.851 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:17.851 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71856
00:05:17.851 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:17.851 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71856
00:05:17.851 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:17.851 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:17.851 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:17.851 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:17.851 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:17.851 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:17.851 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:17.851 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:17.851 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:17.851 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71856
00:05:17.851 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:17.851 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71856
00:05:17.851 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:17.851 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71856
00:05:17.851 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:17.851 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71856
00:05:17.851 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:17.851 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71856
00:05:17.851 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:17.851 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71856
00:05:17.851 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:17.851 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:17.851 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:17.851 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:17.851 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:17.851 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:17.851 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:17.851 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71856
00:05:17.851 element at address: 0x20000085e640 with size: 0.125488 MiB
00:05:17.851 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71856
00:05:17.851 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:17.851 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:17.851 element at address: 0x200028265680 with size: 0.023743 MiB
00:05:17.851 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:17.851 element at address: 0x20000085a380 with size: 0.016113 MiB
00:05:17.851 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71856
00:05:17.851 element at address: 0x20002826b7c0 with size: 0.002441 MiB
00:05:17.851 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:17.851 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:05:17.851 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71856
00:05:17.851 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:17.851 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71856
00:05:17.851 element at address: 0x20000085a180 with size: 0.000305 MiB
00:05:17.851 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71856
00:05:17.851 element at address: 0x20002826c280 with size: 0.000305 MiB
00:05:17.851 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:17.851 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:17.851 18:36:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71856
00:05:17.851 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71856 ']'
00:05:17.851 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71856
00:05:17.851 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:17.851 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:17.852 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71856
00:05:18.111 killing process with pid 71856
00:05:18.111 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:18.111 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:18.111 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71856'
00:05:18.111 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71856
00:05:18.111 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71856
00:05:18.371
00:05:18.371 real 0m1.714s
00:05:18.371 user 0m1.695s
00:05:18.371 sys 0m0.518s
00:05:18.371 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.371 ************************************
00:05:18.371 END TEST dpdk_mem_utility
00:05:18.371 ************************************
00:05:18.371 18:36:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:18.371 18:36:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:18.371 18:36:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.371 18:36:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.371 18:36:18 -- common/autotest_common.sh@10 -- # set +x
00:05:18.371 ************************************
00:05:18.371 START TEST event
00:05:18.371 ************************************
00:05:18.371 18:36:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:18.632 * Looking for test storage...
00:05:18.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:18.632 18:36:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:18.632 18:36:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:18.632 18:36:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:18.632 18:36:18 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:18.632 18:36:18 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:18.632 18:36:18 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:18.632 18:36:18 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:18.632 18:36:18 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:18.632 18:36:18 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:18.632 18:36:18 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:18.632 18:36:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:18.632 18:36:18 event -- scripts/common.sh@344 -- # case "$op" in
00:05:18.632 18:36:18 event -- scripts/common.sh@345 -- # : 1
00:05:18.632 18:36:18 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:18.632 18:36:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:18.632 18:36:18 event -- scripts/common.sh@365 -- # decimal 1
00:05:18.632 18:36:18 event -- scripts/common.sh@353 -- # local d=1
00:05:18.632 18:36:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:18.632 18:36:18 event -- scripts/common.sh@355 -- # echo 1
00:05:18.632 18:36:18 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:18.632 18:36:18 event -- scripts/common.sh@366 -- # decimal 2
00:05:18.632 18:36:18 event -- scripts/common.sh@353 -- # local d=2
00:05:18.632 18:36:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:18.632 18:36:18 event -- scripts/common.sh@355 -- # echo 2
00:05:18.632 18:36:18 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:18.632 18:36:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:18.632 18:36:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:18.632 18:36:18 event -- scripts/common.sh@368 -- # return 0
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:18.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.632 --rc genhtml_branch_coverage=1
00:05:18.632 --rc genhtml_function_coverage=1
00:05:18.632 --rc genhtml_legend=1
00:05:18.632 --rc geninfo_all_blocks=1
00:05:18.632 --rc geninfo_unexecuted_blocks=1
00:05:18.632
00:05:18.632 '
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:18.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.632 --rc genhtml_branch_coverage=1
00:05:18.632 --rc genhtml_function_coverage=1
00:05:18.632 --rc genhtml_legend=1
00:05:18.632 --rc geninfo_all_blocks=1
00:05:18.632 --rc geninfo_unexecuted_blocks=1
00:05:18.632
00:05:18.632 '
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:18.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.632 --rc genhtml_branch_coverage=1
00:05:18.632 --rc genhtml_function_coverage=1
00:05:18.632 --rc genhtml_legend=1
00:05:18.632 --rc geninfo_all_blocks=1
00:05:18.632 --rc geninfo_unexecuted_blocks=1
00:05:18.632
00:05:18.632 '
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:18.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.632 --rc genhtml_branch_coverage=1
00:05:18.632 --rc genhtml_function_coverage=1
00:05:18.632 --rc genhtml_legend=1
00:05:18.632 --rc geninfo_all_blocks=1
00:05:18.632 --rc geninfo_unexecuted_blocks=1
00:05:18.632
00:05:18.632 '
00:05:18.632 18:36:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:18.632 18:36:18 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:18.632 18:36:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:18.632 18:36:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.632 18:36:18 event -- common/autotest_common.sh@10 -- # set +x
00:05:18.632 ************************************
00:05:18.632 START TEST event_perf
00:05:18.632 ************************************
00:05:18.632 18:36:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:18.632 Running I/O for 1 seconds...[2024-12-15 18:36:19.032241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:18.632 [2024-12-15 18:36:19.032408] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71942 ]
00:05:18.891 [2024-12-15 18:36:19.201858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:18.891 [2024-12-15 18:36:19.240660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:18.891 [2024-12-15 18:36:19.240927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:18.891 [2024-12-15 18:36:19.240890] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.891 Running I/O for 1 seconds...[2024-12-15 18:36:19.241027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:20.268
00:05:20.268 lcore 0: 206137
00:05:20.268 lcore 1: 206137
00:05:20.268 lcore 2: 206136
00:05:20.268 lcore 3: 206137
00:05:20.268 done.
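The lcore lines above are per-core event counts accumulated over the test's fixed 1-second window, so their sum is also the aggregate events-per-second rate. A minimal shell sketch (not SPDK source; the counts are copied from this run) of that aggregation:

```shell
# Recompute the event_perf summary: one counter per lcore, summed at the end.
total=0
i=0
for count in 206137 206137 206136 206137; do
  echo "lcore $i: $count"          # mirrors the per-lcore report lines
  total=$((total + count))
  i=$((i + 1))
done
# The run lasted 1 second, so the sum doubles as the events/sec figure.
echo "total: $total events in 1s"
```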
00:05:20.268
00:05:20.268 real 0m1.323s
00:05:20.268 user 0m4.091s
00:05:20.268 sys 0m0.114s
00:05:20.268 ************************************
00:05:20.268 END TEST event_perf
00:05:20.268 ************************************
00:05:20.268 18:36:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.268 18:36:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:20.268 18:36:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:20.268 18:36:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:20.268 18:36:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.268 18:36:20 event -- common/autotest_common.sh@10 -- # set +x
00:05:20.268 ************************************
00:05:20.268 START TEST event_reactor
00:05:20.268 ************************************
00:05:20.268 18:36:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:20.268 [2024-12-15 18:36:20.429279] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:20.268 [2024-12-15 18:36:20.429456] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71976 ]
00:05:20.268 [2024-12-15 18:36:20.600276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.268 [2024-12-15 18:36:20.625875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.646 test_start
00:05:21.646 oneshot
00:05:21.646 tick 100
00:05:21.646 tick 100
00:05:21.646 tick 250
00:05:21.646 tick 100
00:05:21.646 tick 100
00:05:21.646 tick 100
00:05:21.646 tick 250
00:05:21.646 tick 500
00:05:21.646 tick 100
00:05:21.646 tick 100
00:05:21.646 tick 250
00:05:21.646 tick 100
00:05:21.646 tick 100
00:05:21.646 test_end
00:05:21.646
00:05:21.646 real 0m1.306s
00:05:21.646 user 0m1.115s
00:05:21.646 sys 0m0.084s
00:05:21.646 18:36:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.646 18:36:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:21.646 ************************************
00:05:21.646 END TEST event_reactor
00:05:21.646 ************************************
00:05:21.646 18:36:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:21.646 18:36:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:21.646 18:36:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.646 18:36:21 event -- common/autotest_common.sh@10 -- # set +x
00:05:21.646 ************************************
00:05:21.646 START TEST event_reactor_perf
00:05:21.646 ************************************
00:05:21.646 18:36:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:21.646 [2024-12-15 18:36:21.807398] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:21.646 [2024-12-15 18:36:21.807521] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ]
00:05:21.646 [2024-12-15 18:36:21.977495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.646 [2024-12-15 18:36:22.003144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.025 test_start
00:05:23.025 test_end
00:05:23.025 Performance: 389333 events per second
00:05:23.025
00:05:23.025 real 0m1.307s
00:05:23.025 user 0m1.107s
00:05:23.025 sys 0m0.092s
00:05:23.025 18:36:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.025 ************************************
00:05:23.025 END TEST event_reactor_perf
00:05:23.025 ************************************
00:05:23.025 18:36:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:23.025 18:36:23 event -- event/event.sh@49 -- # uname -s
00:05:23.025 18:36:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:23.025 18:36:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:23.025 18:36:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.025 18:36:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.025 18:36:23 event -- common/autotest_common.sh@10 -- # set +x
00:05:23.025 ************************************
00:05:23.025 START TEST event_scheduler
00:05:23.025 ************************************
00:05:23.025 18:36:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:23.025 * Looking for test storage...
00:05:23.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:23.025 18:36:23 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:23.025 18:36:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:23.025 18:36:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:23.025 18:36:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.025 18:36:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.026 18:36:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:23.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.026 --rc genhtml_branch_coverage=1
00:05:23.026 --rc genhtml_function_coverage=1
00:05:23.026 --rc genhtml_legend=1
00:05:23.026 --rc geninfo_all_blocks=1
00:05:23.026 --rc geninfo_unexecuted_blocks=1
00:05:23.026
00:05:23.026 '
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:23.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.026 --rc genhtml_branch_coverage=1
00:05:23.026 --rc genhtml_function_coverage=1
00:05:23.026 --rc genhtml_legend=1
00:05:23.026 --rc geninfo_all_blocks=1
00:05:23.026 --rc geninfo_unexecuted_blocks=1
00:05:23.026
00:05:23.026 '
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:23.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.026 --rc genhtml_branch_coverage=1
00:05:23.026 --rc genhtml_function_coverage=1
00:05:23.026 --rc genhtml_legend=1
00:05:23.026 --rc geninfo_all_blocks=1
00:05:23.026 --rc geninfo_unexecuted_blocks=1
00:05:23.026
00:05:23.026 '
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:23.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.026 --rc genhtml_branch_coverage=1
00:05:23.026 --rc genhtml_function_coverage=1
00:05:23.026 --rc genhtml_legend=1
00:05:23.026 --rc geninfo_all_blocks=1
00:05:23.026 --rc geninfo_unexecuted_blocks=1
00:05:23.026
00:05:23.026 '
00:05:23.026 18:36:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:23.026 18:36:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72083
00:05:23.026 18:36:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:23.026 18:36:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:23.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.026 18:36:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72083
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72083 ']'
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.026 18:36:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:23.026 [2024-12-15 18:36:23.441724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:23.026 [2024-12-15 18:36:23.441872] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ]
00:05:23.285 [2024-12-15 18:36:23.614944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:23.285 [2024-12-15 18:36:23.647767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.285 [2024-12-15 18:36:23.647911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.285 [2024-12-15 18:36:23.648086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:23.285 [2024-12-15 18:36:23.647952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:23.853 18:36:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.853 18:36:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:23.853 18:36:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:23.853 18:36:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:23.853 18:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:24.112 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:24.112 POWER: Cannot set governor of lcore 0 to userspace
00:05:24.112 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:24.112 POWER: Cannot set governor of lcore 0 to performance
00:05:24.112 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:24.112 POWER: Cannot set governor of lcore 0 to userspace
00:05:24.112 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:24.112 POWER: Cannot set governor of lcore 0 to userspace
00:05:24.112 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:24.112 POWER: Unable to set Power Management Environment for lcore 0
00:05:24.112 [2024-12-15 18:36:24.293228] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:05:24.112 [2024-12-15 18:36:24.293253] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:05:24.112 [2024-12-15 18:36:24.293268] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:24.112 [2024-12-15 18:36:24.293288] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:24.112 [2024-12-15 18:36:24.293297] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:24.112 [2024-12-15 18:36:24.293310] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:24.112 18:36:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 [2024-12-15 18:36:24.370684] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 ************************************ 00:05:24.112 START TEST scheduler_create_thread 00:05:24.112 ************************************ 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 2 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 3 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 4 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 5 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 6 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # 
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 7 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.112 8 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.112 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.113 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.113 9 00:05:24.113 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.113 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.113 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.113 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.681 10 
00:05:24.681 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.681 18:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.681 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.681 18:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.056 18:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.056 18:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.056 18:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.056 18:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.056 18:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.991 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.991 18:36:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.991 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.991 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.557 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.557 18:36:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.557 18:36:27 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.557 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.557 18:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.490 ************************************ 00:05:28.490 END TEST scheduler_create_thread 00:05:28.490 ************************************ 00:05:28.490 18:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.490 00:05:28.490 real 0m4.211s 00:05:28.490 user 0m0.030s 00:05:28.490 sys 0m0.007s 00:05:28.490 18:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.490 18:36:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.490 18:36:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:28.490 18:36:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72083 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72083 ']' 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72083 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72083 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:28.490 killing process with pid 72083 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 72083' 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72083 00:05:28.490 18:36:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72083 00:05:28.490 [2024-12-15 18:36:28.871851] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:28.769 00:05:28.769 real 0m6.003s 00:05:28.769 user 0m12.970s 00:05:28.769 sys 0m0.466s 00:05:28.769 18:36:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.769 ************************************ 00:05:28.769 END TEST event_scheduler 00:05:28.769 ************************************ 00:05:28.769 18:36:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.769 18:36:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.769 18:36:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.769 18:36:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.769 18:36:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.769 18:36:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.032 ************************************ 00:05:29.032 START TEST app_repeat 00:05:29.032 ************************************ 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:29.032 18:36:29 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72200 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72200' 00:05:29.032 Process app_repeat pid: 72200 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.032 spdk_app_start Round 0 00:05:29.032 18:36:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72200 /var/tmp/spdk-nbd.sock 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72200 ']' 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.032 18:36:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.032 [2024-12-15 18:36:29.258004] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:29.032 [2024-12-15 18:36:29.258269] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72200 ] 00:05:29.032 [2024-12-15 18:36:29.419588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.032 [2024-12-15 18:36:29.449824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.032 [2024-12-15 18:36:29.449956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.968 18:36:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.968 18:36:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:29.968 18:36:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.968 Malloc0 00:05:29.968 18:36:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.227 Malloc1 00:05:30.227 18:36:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.227 18:36:30 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.227 18:36:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.486 /dev/nbd0 00:05:30.486 18:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.486 18:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.486 1+0 records in 00:05:30.486 1+0 
records out 00:05:30.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295548 s, 13.9 MB/s 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.486 18:36:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.486 18:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.486 18:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.486 18:36:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.745 /dev/nbd1 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.745 1+0 records in 00:05:30.745 1+0 records out 00:05:30.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272366 s, 15.0 MB/s 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.745 18:36:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.745 18:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.005 { 00:05:31.005 "nbd_device": "/dev/nbd0", 00:05:31.005 "bdev_name": "Malloc0" 00:05:31.005 }, 00:05:31.005 { 00:05:31.005 "nbd_device": "/dev/nbd1", 00:05:31.005 "bdev_name": "Malloc1" 00:05:31.005 } 00:05:31.005 ]' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.005 { 00:05:31.005 "nbd_device": "/dev/nbd0", 00:05:31.005 "bdev_name": "Malloc0" 00:05:31.005 }, 00:05:31.005 { 00:05:31.005 "nbd_device": "/dev/nbd1", 00:05:31.005 "bdev_name": "Malloc1" 00:05:31.005 } 00:05:31.005 ]' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.005 /dev/nbd1' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.005 /dev/nbd1' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.005 256+0 records in 00:05:31.005 256+0 records out 00:05:31.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00537134 s, 195 MB/s 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.005 256+0 records in 00:05:31.005 256+0 records out 00:05:31.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176844 s, 59.3 MB/s 00:05:31.005 18:36:31 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.005 256+0 records in 00:05:31.005 256+0 records out 00:05:31.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234052 s, 44.8 MB/s 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.005 18:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.264 18:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.523 18:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.781 18:36:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.781 18:36:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.039 18:36:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.298 [2024-12-15 18:36:32.587245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.298 [2024-12-15 18:36:32.616341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.298 [2024-12-15 18:36:32.616341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.298 
[2024-12-15 18:36:32.659345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.298 [2024-12-15 18:36:32.659411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.588 spdk_app_start Round 1 00:05:35.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.588 18:36:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.588 18:36:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:35.588 18:36:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72200 /var/tmp/spdk-nbd.sock 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72200 ']' 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.588 18:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.588 18:36:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.588 Malloc0 00:05:35.588 18:36:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.846 Malloc1 00:05:35.847 18:36:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.847 18:36:36 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.847 18:36:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.105 /dev/nbd0 00:05:36.105 18:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.105 18:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.105 1+0 records in 00:05:36.105 1+0 records out 00:05:36.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440505 s, 9.3 MB/s 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.105 18:36:36 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.105 18:36:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.105 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.105 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.105 18:36:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.364 /dev/nbd1 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.364 1+0 records in 00:05:36.364 1+0 records out 00:05:36.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402007 s, 10.2 MB/s 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:36.364 18:36:36 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:36.364 18:36:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.364 18:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.624 { 00:05:36.624 "nbd_device": "/dev/nbd0", 00:05:36.624 "bdev_name": "Malloc0" 00:05:36.624 }, 00:05:36.624 { 00:05:36.624 "nbd_device": "/dev/nbd1", 00:05:36.624 "bdev_name": "Malloc1" 00:05:36.624 } 00:05:36.624 ]' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.624 { 00:05:36.624 "nbd_device": "/dev/nbd0", 00:05:36.624 "bdev_name": "Malloc0" 00:05:36.624 }, 00:05:36.624 { 00:05:36.624 "nbd_device": "/dev/nbd1", 00:05:36.624 "bdev_name": "Malloc1" 00:05:36.624 } 00:05:36.624 ]' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.624 /dev/nbd1' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.624 /dev/nbd1' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.624 
18:36:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.624 256+0 records in 00:05:36.624 256+0 records out 00:05:36.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129662 s, 80.9 MB/s 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.624 256+0 records in 00:05:36.624 256+0 records out 00:05:36.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209421 s, 50.1 MB/s 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.624 256+0 records in 00:05:36.624 256+0 records out 00:05:36.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222046 s, 47.2 MB/s 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.624 18:36:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.884 18:36:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.884 18:36:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.144 18:36:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.403 18:36:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.403 18:36:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.403 18:36:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.669 18:36:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.669 [2024-12-15 18:36:38.032448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.669 [2024-12-15 18:36:38.062105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.669 [2024-12-15 18:36:38.062124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.937 [2024-12-15 18:36:38.105779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.937 [2024-12-15 18:36:38.105870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.477 spdk_app_start Round 2 00:05:40.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:40.477 18:36:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.477 18:36:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.477 18:36:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72200 /var/tmp/spdk-nbd.sock 00:05:40.477 18:36:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72200 ']' 00:05:40.477 18:36:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.477 18:36:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.477 18:36:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.477 18:36:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.478 18:36:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.737 18:36:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.737 18:36:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.738 18:36:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.997 Malloc0 00:05:40.997 18:36:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.257 Malloc1 00:05:41.257 18:36:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.257 18:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.517 /dev/nbd0 00:05:41.518 18:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.518 18:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.518 1+0 records in 00:05:41.518 1+0 records out 00:05:41.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512891 s, 8.0 MB/s 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.518 18:36:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.518 18:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.518 18:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.518 18:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.777 /dev/nbd1 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.777 18:36:42 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.777 1+0 records in 00:05:41.777 1+0 records out 00:05:41.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347147 s, 11.8 MB/s 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.777 18:36:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.777 18:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.038 { 00:05:42.038 "nbd_device": "/dev/nbd0", 00:05:42.038 "bdev_name": "Malloc0" 00:05:42.038 }, 00:05:42.038 { 00:05:42.038 "nbd_device": "/dev/nbd1", 00:05:42.038 "bdev_name": "Malloc1" 00:05:42.038 } 00:05:42.038 ]' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.038 { 00:05:42.038 "nbd_device": "/dev/nbd0", 00:05:42.038 "bdev_name": "Malloc0" 00:05:42.038 }, 00:05:42.038 { 00:05:42.038 "nbd_device": "/dev/nbd1", 00:05:42.038 "bdev_name": "Malloc1" 00:05:42.038 } 00:05:42.038 ]' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.038 /dev/nbd1' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.038 /dev/nbd1' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.038 256+0 records in 00:05:42.038 256+0 records out 00:05:42.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130757 s, 80.2 MB/s 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.038 18:36:42 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.038 256+0 records in 00:05:42.038 256+0 records out 00:05:42.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201432 s, 52.1 MB/s 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.038 256+0 records in 00:05:42.038 256+0 records out 00:05:42.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018207 s, 57.6 MB/s 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.038 18:36:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.298 18:36:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.557 18:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.816 18:36:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.816 18:36:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.076 18:36:43 event.app_repeat -- 
event/event.sh@35 -- # sleep 3 00:05:43.076 [2024-12-15 18:36:43.462775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.076 [2024-12-15 18:36:43.492356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.076 [2024-12-15 18:36:43.492359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.334 [2024-12-15 18:36:43.535918] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.334 [2024-12-15 18:36:43.536003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.621 18:36:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72200 /var/tmp/spdk-nbd.sock 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72200 ']' 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.621 18:36:46 event.app_repeat -- event/event.sh@39 -- # killprocess 72200 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72200 ']' 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72200 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72200 00:05:46.621 killing process with pid 72200 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72200' 00:05:46.621 18:36:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72200 00:05:46.622 18:36:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72200 00:05:46.622 spdk_app_start is called in Round 0. 00:05:46.622 Shutdown signal received, stop current app iteration 00:05:46.622 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:46.622 spdk_app_start is called in Round 1. 00:05:46.622 Shutdown signal received, stop current app iteration 00:05:46.622 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:46.622 spdk_app_start is called in Round 2. 
00:05:46.622 Shutdown signal received, stop current app iteration 00:05:46.622 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:46.622 spdk_app_start is called in Round 3. 00:05:46.622 Shutdown signal received, stop current app iteration 00:05:46.622 18:36:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.622 18:36:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.622 00:05:46.622 real 0m17.564s 00:05:46.622 user 0m38.965s 00:05:46.622 sys 0m2.660s 00:05:46.622 18:36:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.622 18:36:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.622 ************************************ 00:05:46.622 END TEST app_repeat 00:05:46.622 ************************************ 00:05:46.622 18:36:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.622 18:36:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:46.622 18:36:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.622 18:36:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.622 18:36:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.622 ************************************ 00:05:46.622 START TEST cpu_locks 00:05:46.622 ************************************ 00:05:46.622 18:36:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:46.622 * Looking for test storage... 
00:05:46.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:46.622 18:36:46 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.622 18:36:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.622 18:36:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.622 18:36:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.622 --rc genhtml_branch_coverage=1 00:05:46.622 --rc genhtml_function_coverage=1 00:05:46.622 --rc genhtml_legend=1 00:05:46.622 --rc geninfo_all_blocks=1 00:05:46.622 --rc geninfo_unexecuted_blocks=1 00:05:46.622 00:05:46.622 ' 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.622 --rc genhtml_branch_coverage=1 00:05:46.622 --rc genhtml_function_coverage=1 00:05:46.622 --rc genhtml_legend=1 00:05:46.622 --rc geninfo_all_blocks=1 00:05:46.622 --rc geninfo_unexecuted_blocks=1 
00:05:46.622 00:05:46.622 ' 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.622 --rc genhtml_branch_coverage=1 00:05:46.622 --rc genhtml_function_coverage=1 00:05:46.622 --rc genhtml_legend=1 00:05:46.622 --rc geninfo_all_blocks=1 00:05:46.622 --rc geninfo_unexecuted_blocks=1 00:05:46.622 00:05:46.622 ' 00:05:46.622 18:36:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.622 --rc genhtml_branch_coverage=1 00:05:46.622 --rc genhtml_function_coverage=1 00:05:46.622 --rc genhtml_legend=1 00:05:46.622 --rc geninfo_all_blocks=1 00:05:46.622 --rc geninfo_unexecuted_blocks=1 00:05:46.622 00:05:46.622 ' 00:05:46.622 18:36:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.622 18:36:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.881 18:36:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.881 18:36:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.881 18:36:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.881 18:36:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.881 18:36:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.881 ************************************ 00:05:46.881 START TEST default_locks 00:05:46.881 ************************************ 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72625 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.881 
18:36:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72625 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72625 ']' 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.881 18:36:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.881 [2024-12-15 18:36:47.173320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:46.881 [2024-12-15 18:36:47.173442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72625 ] 00:05:47.140 [2024-12-15 18:36:47.343043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.140 [2024-12-15 18:36:47.371602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.723 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.723 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:47.723 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72625 00:05:47.723 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72625 00:05:47.723 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72625 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72625 ']' 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72625 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72625 00:05:47.982 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.983 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.983 killing process with pid 72625 00:05:47.983 18:36:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 72625' 00:05:47.983 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72625 00:05:47.983 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72625 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72625 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72625 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72625 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72625 ']' 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ERROR: process (pid: 72625) is no longer running 00:05:48.553 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72625) - No such process 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.553 00:05:48.553 real 0m1.653s 00:05:48.553 user 0m1.643s 00:05:48.553 sys 0m0.558s 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.553 ************************************ 00:05:48.553 END TEST default_locks 00:05:48.553 ************************************ 00:05:48.553 18:36:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 18:36:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.553 18:36:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:48.553 18:36:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.553 18:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 ************************************ 00:05:48.553 START TEST default_locks_via_rpc 00:05:48.553 ************************************ 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72675 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72675 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72675 ']' 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.553 18:36:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.553 [2024-12-15 18:36:48.892762] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:48.553 [2024-12-15 18:36:48.892993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72675 ] 00:05:48.813 [2024-12-15 18:36:49.064066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.813 [2024-12-15 18:36:49.091822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.380 18:36:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.380 18:36:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.381 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72675 00:05:49.381 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72675 00:05:49.381 18:36:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72675 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72675 ']' 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72675 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72675 00:05:49.949 killing process with pid 72675 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72675' 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72675 00:05:49.949 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72675 00:05:50.209 ************************************ 00:05:50.209 END TEST default_locks_via_rpc 00:05:50.209 ************************************ 00:05:50.209 00:05:50.209 real 0m1.839s 00:05:50.209 user 0m1.839s 00:05:50.209 sys 0m0.635s 00:05:50.209 
18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.209 18:36:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.469 18:36:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.469 18:36:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.469 18:36:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.469 18:36:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.469 ************************************ 00:05:50.469 START TEST non_locking_app_on_locked_coremask 00:05:50.469 ************************************ 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72728 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72728 /var/tmp/spdk.sock 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72728 ']' 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.469 18:36:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.469 [2024-12-15 18:36:50.799152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:50.469 [2024-12-15 18:36:50.799291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72728 ] 00:05:50.730 [2024-12-15 18:36:50.973886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.730 [2024-12-15 18:36:51.003223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72744 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72744 /var/tmp/spdk2.sock 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72744 ']' 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.299 18:36:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.558 [2024-12-15 18:36:51.748787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:51.558 [2024-12-15 18:36:51.749044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72744 ] 00:05:51.558 [2024-12-15 18:36:51.919884] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:51.558 [2024-12-15 18:36:51.919971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.558 [2024-12-15 18:36:51.976470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.497 18:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.497 18:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.497 18:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72728 00:05:52.497 18:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72728 00:05:52.497 18:36:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72728 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72728 ']' 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72728 00:05:52.757 18:36:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72728 00:05:52.757 killing process with pid 72728 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72728' 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72728 00:05:52.757 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72728 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72744 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72744 ']' 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72744 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72744 00:05:53.697 killing process with pid 72744 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72744' 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72744 00:05:53.697 18:36:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72744 00:05:53.956 00:05:53.956 real 0m3.478s 00:05:53.956 user 0m3.698s 00:05:53.956 sys 0m1.080s 00:05:53.956 18:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.956 18:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 END TEST non_locking_app_on_locked_coremask 00:05:53.956 ************************************ 00:05:53.956 18:36:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.956 18:36:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.956 18:36:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.956 18:36:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.956 ************************************ 00:05:53.956 START TEST locking_app_on_unlocked_coremask 00:05:53.956 ************************************ 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72807 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72807 /var/tmp/spdk.sock 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72807 ']' 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.956 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.957 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.957 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.957 18:36:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.957 [2024-12-15 18:36:54.344716] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:53.957 [2024-12-15 18:36:54.344934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72807 ] 00:05:54.216 [2024-12-15 18:36:54.517913] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.216 [2024-12-15 18:36:54.518053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.216 [2024-12-15 18:36:54.545837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72818 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72818 /var/tmp/spdk2.sock 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72818 ']' 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.787 18:36:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.054 [2024-12-15 18:36:55.263267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:55.054 [2024-12-15 18:36:55.263507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72818 ] 00:05:55.054 [2024-12-15 18:36:55.435600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.326 [2024-12-15 18:36:55.516007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.896 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.896 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.896 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72818 00:05:55.896 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72818 00:05:55.896 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72807 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72807 ']' 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72807 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72807 00:05:56.465 killing process with pid 72807 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72807' 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72807 00:05:56.465 18:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72807 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72818 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72818 ']' 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72818 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72818 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.845 killing process with pid 72818 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72818' 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72818 00:05:57.845 18:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 72818 00:05:58.415 ************************************ 00:05:58.415 END TEST locking_app_on_unlocked_coremask 00:05:58.415 ************************************ 00:05:58.415 00:05:58.415 real 0m4.364s 00:05:58.415 user 0m4.393s 00:05:58.415 sys 0m1.256s 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 18:36:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.415 18:36:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.415 18:36:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.415 18:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 ************************************ 00:05:58.415 START TEST locking_app_on_locked_coremask 00:05:58.415 ************************************ 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72898 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72898 /var/tmp/spdk.sock 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72898 ']' 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.415 18:36:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.415 [2024-12-15 18:36:58.776820] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:58.415 [2024-12-15 18:36:58.777033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72898 ] 00:05:58.674 [2024-12-15 18:36:58.947159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.674 [2024-12-15 18:36:58.986611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72914 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72914 /var/tmp/spdk2.sock 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72914 /var/tmp/spdk2.sock 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72914 /var/tmp/spdk2.sock 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72914 ']' 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.244 18:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.503 [2024-12-15 18:36:59.697346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:59.503 [2024-12-15 18:36:59.697569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72914 ] 00:05:59.503 [2024-12-15 18:36:59.863824] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72898 has claimed it. 00:05:59.503 [2024-12-15 18:36:59.863914] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.072 ERROR: process (pid: 72914) is no longer running 00:06:00.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72914) - No such process 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72898 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72898 00:06:00.072 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72898 00:06:00.332 18:37:00 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72898 ']' 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72898 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72898 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72898' 00:06:00.332 killing process with pid 72898 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72898 00:06:00.332 18:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72898 00:06:00.902 00:06:00.902 real 0m2.578s 00:06:00.902 user 0m2.634s 00:06:00.902 sys 0m0.830s 00:06:00.902 18:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.902 18:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.902 ************************************ 00:06:00.902 END TEST locking_app_on_locked_coremask 00:06:00.902 ************************************ 00:06:00.902 18:37:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.902 18:37:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:00.902 18:37:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.902 18:37:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.902 ************************************ 00:06:00.902 START TEST locking_overlapped_coremask 00:06:00.902 ************************************ 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72956 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72956 /var/tmp/spdk.sock 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.902 18:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.161 [2024-12-15 18:37:01.426850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:01.161 [2024-12-15 18:37:01.427105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:06:01.161 [2024-12-15 18:37:01.600037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.420 [2024-12-15 18:37:01.645996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.420 [2024-12-15 18:37:01.646137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.420 [2024-12-15 18:37:01.646235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72974 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72974 /var/tmp/spdk2.sock 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72974 /var/tmp/spdk2.sock 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72974 /var/tmp/spdk2.sock 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72974 ']' 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.989 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.989 [2024-12-15 18:37:02.333561] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:01.989 [2024-12-15 18:37:02.333787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72974 ] 00:06:02.249 [2024-12-15 18:37:02.498762] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72956 has claimed it. 00:06:02.249 [2024-12-15 18:37:02.498845] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:02.507 ERROR: process (pid: 72974) is no longer running 00:06:02.507 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72974) - No such process 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72956 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72956 00:06:02.507 18:37:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.507 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:06:02.766 killing process with pid 72956 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72956 00:06:02.766 18:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72956 00:06:03.346 00:06:03.346 real 0m2.274s 00:06:03.346 user 0m5.903s 00:06:03.346 sys 0m0.664s 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.346 ************************************ 00:06:03.346 END TEST locking_overlapped_coremask 00:06:03.346 ************************************ 00:06:03.346 18:37:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.346 18:37:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.346 18:37:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.346 18:37:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.346 ************************************ 00:06:03.346 START TEST 
locking_overlapped_coremask_via_rpc 00:06:03.346 ************************************ 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73022 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73022 /var/tmp/spdk.sock 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73022 ']' 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.346 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.347 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.347 18:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.606 [2024-12-15 18:37:03.786288] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:03.606 [2024-12-15 18:37:03.786534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73022 ] 00:06:03.606 [2024-12-15 18:37:03.956407] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.606 [2024-12-15 18:37:03.956485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.606 [2024-12-15 18:37:04.002418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.606 [2024-12-15 18:37:04.002518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.606 [2024-12-15 18:37:04.002616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73034 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73034 /var/tmp/spdk2.sock 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73034 ']' 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.175 18:37:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.175 18:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.435 [2024-12-15 18:37:04.683649] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:04.435 [2024-12-15 18:37:04.683878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73034 ] 00:06:04.435 [2024-12-15 18:37:04.851424] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.435 [2024-12-15 18:37:04.851477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.693 [2024-12-15 18:37:04.913945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.693 [2024-12-15 18:37:04.916899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.693 [2024-12-15 18:37:04.917016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.259 18:37:05 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 [2024-12-15 18:37:05.556028] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73022 has claimed it. 00:06:05.259 request: 00:06:05.259 { 00:06:05.259 "method": "framework_enable_cpumask_locks", 00:06:05.259 "req_id": 1 00:06:05.259 } 00:06:05.259 Got JSON-RPC error response 00:06:05.259 response: 00:06:05.259 { 00:06:05.259 "code": -32603, 00:06:05.259 "message": "Failed to claim CPU core: 2" 00:06:05.259 } 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73022 /var/tmp/spdk.sock 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 73022 ']' 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.259 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73034 /var/tmp/spdk2.sock 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73034 ']' 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.518 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.776 00:06:05.776 real 0m2.313s 00:06:05.776 user 0m1.066s 00:06:05.776 sys 0m0.177s 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.776 18:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.776 ************************************ 00:06:05.776 END TEST locking_overlapped_coremask_via_rpc 00:06:05.776 ************************************ 00:06:05.776 18:37:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.776 18:37:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73022 ]] 00:06:05.776 18:37:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 73022 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73022 ']' 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73022 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73022 00:06:05.776 killing process with pid 73022 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73022' 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73022 00:06:05.776 18:37:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73022 00:06:06.344 18:37:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73034 ]] 00:06:06.344 18:37:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73034 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73034 ']' 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73034 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73034 00:06:06.344 killing process with pid 73034 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 73034' 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73034 00:06:06.344 18:37:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73034 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.913 Process with pid 73022 is not found 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73022 ]] 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73022 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73022 ']' 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73022 00:06:06.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73022) - No such process 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73022 is not found' 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73034 ]] 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73034 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73034 ']' 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73034 00:06:06.913 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73034) - No such process 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73034 is not found' 00:06:06.913 Process with pid 73034 is not found 00:06:06.913 18:37:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.913 00:06:06.913 real 0m20.308s 00:06:06.913 user 0m33.588s 00:06:06.913 sys 0m6.533s 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.913 18:37:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.913 
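The `killprocess` traces above (a `kill -0` liveness probe, a `ps --no-headers -o comm=` name lookup, then `kill`/`wait`, with a "No such process" fallback in cleanup) condense to roughly the helper below. This is a simplified reading of the traced logic, not the real `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Simplified reading of the killprocess flow traced in the log above
# (not the real autotest_common.sh implementation).
killprocess() {
    local pid=$1 name
    if ! kill -0 "$pid" 2>/dev/null; then       # liveness probe, as in the trace
        echo "Process with pid $pid is not found"
        return 0
    fi
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_2
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}
```

Calling it twice reproduces both log shapes: the first call kills and reports the process name, the second hits the "not found" branch, just as the cleanup stage does after the tests already killed pids 73022 and 73034.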
************************************ 00:06:06.913 END TEST cpu_locks 00:06:06.913 ************************************ 00:06:06.913 00:06:06.913 real 0m48.453s 00:06:06.913 user 1m32.103s 00:06:06.913 sys 0m10.327s 00:06:06.913 18:37:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.913 18:37:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.913 ************************************ 00:06:06.913 END TEST event 00:06:06.913 ************************************ 00:06:06.913 18:37:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:06.913 18:37:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.913 18:37:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.913 18:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:06.913 ************************************ 00:06:06.913 START TEST thread 00:06:06.913 ************************************ 00:06:06.913 18:37:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:07.174 * Looking for test storage... 
00:06:07.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:07.174 18:37:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.174 18:37:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.174 18:37:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.174 18:37:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.174 18:37:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.174 18:37:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.174 18:37:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.174 18:37:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.174 18:37:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.174 18:37:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.174 18:37:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.174 18:37:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:07.174 18:37:07 thread -- scripts/common.sh@345 -- # : 1 00:06:07.174 18:37:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.174 18:37:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.174 18:37:07 thread -- scripts/common.sh@365 -- # decimal 1 00:06:07.174 18:37:07 thread -- scripts/common.sh@353 -- # local d=1 00:06:07.174 18:37:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.174 18:37:07 thread -- scripts/common.sh@355 -- # echo 1 00:06:07.174 18:37:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.174 18:37:07 thread -- scripts/common.sh@366 -- # decimal 2 00:06:07.174 18:37:07 thread -- scripts/common.sh@353 -- # local d=2 00:06:07.174 18:37:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.174 18:37:07 thread -- scripts/common.sh@355 -- # echo 2 00:06:07.174 18:37:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.174 18:37:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.174 18:37:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.174 18:37:07 thread -- scripts/common.sh@368 -- # return 0 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.174 --rc genhtml_branch_coverage=1 00:06:07.174 --rc genhtml_function_coverage=1 00:06:07.174 --rc genhtml_legend=1 00:06:07.174 --rc geninfo_all_blocks=1 00:06:07.174 --rc geninfo_unexecuted_blocks=1 00:06:07.174 00:06:07.174 ' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.174 --rc genhtml_branch_coverage=1 00:06:07.174 --rc genhtml_function_coverage=1 00:06:07.174 --rc genhtml_legend=1 00:06:07.174 --rc geninfo_all_blocks=1 00:06:07.174 --rc geninfo_unexecuted_blocks=1 00:06:07.174 00:06:07.174 ' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:07.174 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.174 --rc genhtml_branch_coverage=1 00:06:07.174 --rc genhtml_function_coverage=1 00:06:07.174 --rc genhtml_legend=1 00:06:07.174 --rc geninfo_all_blocks=1 00:06:07.174 --rc geninfo_unexecuted_blocks=1 00:06:07.174 00:06:07.174 ' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.174 --rc genhtml_branch_coverage=1 00:06:07.174 --rc genhtml_function_coverage=1 00:06:07.174 --rc genhtml_legend=1 00:06:07.174 --rc geninfo_all_blocks=1 00:06:07.174 --rc geninfo_unexecuted_blocks=1 00:06:07.174 00:06:07.174 ' 00:06:07.174 18:37:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.174 18:37:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.174 ************************************ 00:06:07.174 START TEST thread_poller_perf 00:06:07.174 ************************************ 00:06:07.174 18:37:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.174 [2024-12-15 18:37:07.562054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:07.174 [2024-12-15 18:37:07.562229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73174 ] 00:06:07.434 [2024-12-15 18:37:07.731322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.434 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:07.434 [2024-12-15 18:37:07.774065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.816 [2024-12-15T18:37:09.257Z] ====================================== 00:06:08.816 [2024-12-15T18:37:09.257Z] busy:2301188968 (cyc) 00:06:08.816 [2024-12-15T18:37:09.257Z] total_run_count: 412000 00:06:08.816 [2024-12-15T18:37:09.257Z] tsc_hz: 2290000000 (cyc) 00:06:08.816 [2024-12-15T18:37:09.257Z] ====================================== 00:06:08.816 [2024-12-15T18:37:09.257Z] poller_cost: 5585 (cyc), 2438 (nsec) 00:06:08.816 00:06:08.816 real 0m1.351s 00:06:08.816 user 0m1.138s 00:06:08.816 sys 0m0.108s 00:06:08.816 18:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.816 18:37:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.816 ************************************ 00:06:08.816 END TEST thread_poller_perf 00:06:08.816 ************************************ 00:06:08.816 18:37:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.816 18:37:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.816 18:37:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.816 18:37:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.816 ************************************ 00:06:08.816 START TEST thread_poller_perf 00:06:08.816 
************************************ 00:06:08.816 18:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.816 [2024-12-15 18:37:08.977552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:08.816 [2024-12-15 18:37:08.977774] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:06:08.816 [2024-12-15 18:37:09.144038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.816 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:08.816 [2024-12-15 18:37:09.181542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.195 [2024-12-15T18:37:10.636Z] ====================================== 00:06:10.195 [2024-12-15T18:37:10.636Z] busy:2293437342 (cyc) 00:06:10.195 [2024-12-15T18:37:10.636Z] total_run_count: 5010000 00:06:10.195 [2024-12-15T18:37:10.636Z] tsc_hz: 2290000000 (cyc) 00:06:10.195 [2024-12-15T18:37:10.636Z] ====================================== 00:06:10.195 [2024-12-15T18:37:10.637Z] poller_cost: 457 (cyc), 199 (nsec) 00:06:10.196 00:06:10.196 real 0m1.338s 00:06:10.196 user 0m1.129s 00:06:10.196 sys 0m0.103s 00:06:10.196 ************************************ 00:06:10.196 END TEST thread_poller_perf 00:06:10.196 ************************************ 00:06:10.196 18:37:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.196 18:37:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.196 18:37:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.196 ************************************ 00:06:10.196 END TEST thread 00:06:10.196 ************************************ 00:06:10.196 
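The two poller_perf summaries above reduce to simple integer arithmetic: cycles per poll is `busy` divided by `total_run_count`, and the nanosecond figure rescales that by the TSC frequency. Assuming floor (integer) division, which matches both printed runs, the derivation is:

```shell
#!/usr/bin/env bash
# Reconstructing the poller_cost lines from the counters printed above.
# Assumes integer (floor) division, which reproduces both runs in the log.
poller_cost() {
    local busy=$1 runs=$2 tsc_hz=$3
    local cyc=$(( busy / runs ))                    # cycles per poll
    local nsec=$(( cyc * 1000000000 / tsc_hz ))     # rescale by TSC frequency
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"
}

poller_cost 2301188968  412000 2290000000   # run with -l 1 (1 us period)
poller_cost 2293437342 5010000 2290000000   # run with -l 0 (busy poll)
```

This yields `5585 (cyc), 2438 (nsec)` and `457 (cyc), 199 (nsec)`, matching the summaries, and makes the contrast visible: the zero-period run amortizes fixed overhead across ~12x more polls.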
00:06:10.196 real 0m3.050s 00:06:10.196 user 0m2.432s 00:06:10.196 sys 0m0.420s 00:06:10.196 18:37:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.196 18:37:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.196 18:37:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:10.196 18:37:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:10.196 18:37:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.196 18:37:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.196 18:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:10.196 ************************************ 00:06:10.196 START TEST app_cmdline 00:06:10.196 ************************************ 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:10.196 * Looking for test storage... 00:06:10.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.196 18:37:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.196 --rc genhtml_branch_coverage=1 00:06:10.196 --rc genhtml_function_coverage=1 00:06:10.196 --rc 
genhtml_legend=1 00:06:10.196 --rc geninfo_all_blocks=1 00:06:10.196 --rc geninfo_unexecuted_blocks=1 00:06:10.196 00:06:10.196 ' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.196 --rc genhtml_branch_coverage=1 00:06:10.196 --rc genhtml_function_coverage=1 00:06:10.196 --rc genhtml_legend=1 00:06:10.196 --rc geninfo_all_blocks=1 00:06:10.196 --rc geninfo_unexecuted_blocks=1 00:06:10.196 00:06:10.196 ' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.196 --rc genhtml_branch_coverage=1 00:06:10.196 --rc genhtml_function_coverage=1 00:06:10.196 --rc genhtml_legend=1 00:06:10.196 --rc geninfo_all_blocks=1 00:06:10.196 --rc geninfo_unexecuted_blocks=1 00:06:10.196 00:06:10.196 ' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.196 --rc genhtml_branch_coverage=1 00:06:10.196 --rc genhtml_function_coverage=1 00:06:10.196 --rc genhtml_legend=1 00:06:10.196 --rc geninfo_all_blocks=1 00:06:10.196 --rc geninfo_unexecuted_blocks=1 00:06:10.196 00:06:10.196 ' 00:06:10.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:10.196 18:37:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:10.196 18:37:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73294 00:06:10.196 18:37:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:10.196 18:37:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73294 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73294 ']' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.196 18:37:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.455 [2024-12-15 18:37:10.711120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:10.455 [2024-12-15 18:37:10.711343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73294 ] 00:06:10.455 [2024-12-15 18:37:10.863728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.714 [2024-12-15 18:37:10.902973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.283 18:37:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.283 18:37:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.283 18:37:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:11.283 { 00:06:11.283 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:11.283 "fields": { 00:06:11.283 "major": 25, 00:06:11.283 "minor": 1, 00:06:11.283 "patch": 0, 00:06:11.283 "suffix": "-pre", 00:06:11.283 "commit": "e01cb43b8" 00:06:11.283 } 00:06:11.283 } 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.542 18:37:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.542 18:37:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.542 18:37:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.542 18:37:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.542 18:37:11 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.543 18:37:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.543 18:37:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:11.543 18:37:11 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.543 request: 00:06:11.543 { 00:06:11.543 "method": "env_dpdk_get_mem_stats", 00:06:11.543 "req_id": 1 00:06:11.543 } 00:06:11.543 Got JSON-RPC error response 00:06:11.543 response: 00:06:11.543 { 00:06:11.543 "code": -32601, 00:06:11.543 "message": "Method not found" 00:06:11.543 } 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
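The cmdline test starts spdk_tgt with `--rpcs-allowed spdk_get_version,rpc_get_methods`, checks that the sorted method list contains exactly those two entries, and then confirms that a non-allowed call (`env_dpdk_get_mem_stats`) is rejected with JSON-RPC error -32601 ("Method not found"). The list check condenses to the comparison below; the `printf` stands in for the real `rpc_cmd rpc_get_methods | jq -r '.[]'` pipeline against a live target:

```shell
#!/usr/bin/env bash
# Condensed form of the allow-list check in app/cmdline.sh: sort the methods
# the target reports and compare against the expected pair. The printf is a
# stand-in for the real RPC pipeline (rpc_cmd rpc_get_methods | jq -r '.[]').
expected_methods=(rpc_get_methods spdk_get_version)
methods=($(printf '%s\n' spdk_get_version rpc_get_methods | sort))
if [[ "${methods[*]}" == "${expected_methods[*]}" ]]; then
    echo "method list matches allow-list"
fi
```

Note the two distinct error codes in this log: -32601 here means the method was filtered by the allow-list, while the -32603 earlier (cpu_locks) was an internal failure of an allowed method.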
00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.803 18:37:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73294 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73294 ']' 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73294 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.803 18:37:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73294 00:06:11.803 killing process with pid 73294 00:06:11.803 18:37:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.803 18:37:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.803 18:37:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73294' 00:06:11.803 18:37:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 73294 00:06:11.803 18:37:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 73294 00:06:12.371 ************************************ 00:06:12.372 END TEST app_cmdline 00:06:12.372 ************************************ 00:06:12.372 00:06:12.372 real 0m2.248s 00:06:12.372 user 0m2.342s 00:06:12.372 sys 0m0.695s 00:06:12.372 18:37:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.372 18:37:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.372 18:37:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.372 18:37:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.372 18:37:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.372 18:37:12 -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.372 ************************************ 00:06:12.372 START TEST version 00:06:12.372 ************************************ 00:06:12.372 18:37:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.631 * Looking for test storage... 00:06:12.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.631 18:37:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.631 18:37:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.631 18:37:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.631 18:37:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.631 18:37:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.631 18:37:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.631 18:37:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.631 18:37:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.631 18:37:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.631 18:37:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.631 18:37:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.631 18:37:12 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.631 18:37:12 version -- scripts/common.sh@345 -- # : 1 00:06:12.631 18:37:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.631 18:37:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.631 18:37:12 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.631 18:37:12 version -- scripts/common.sh@353 -- # local d=1 00:06:12.631 18:37:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.631 18:37:12 version -- scripts/common.sh@355 -- # echo 1 00:06:12.631 18:37:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.631 18:37:12 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.631 18:37:12 version -- scripts/common.sh@353 -- # local d=2 00:06:12.631 18:37:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.631 18:37:12 version -- scripts/common.sh@355 -- # echo 2 00:06:12.631 18:37:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.631 18:37:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.631 18:37:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.631 18:37:12 version -- scripts/common.sh@368 -- # return 0 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.631 --rc genhtml_branch_coverage=1 00:06:12.631 --rc genhtml_function_coverage=1 00:06:12.631 --rc genhtml_legend=1 00:06:12.631 --rc geninfo_all_blocks=1 00:06:12.631 --rc geninfo_unexecuted_blocks=1 00:06:12.631 00:06:12.631 ' 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.631 --rc genhtml_branch_coverage=1 00:06:12.631 --rc genhtml_function_coverage=1 00:06:12.631 --rc genhtml_legend=1 00:06:12.631 --rc geninfo_all_blocks=1 00:06:12.631 --rc geninfo_unexecuted_blocks=1 00:06:12.631 00:06:12.631 ' 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.631 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.631 --rc genhtml_branch_coverage=1 00:06:12.631 --rc genhtml_function_coverage=1 00:06:12.631 --rc genhtml_legend=1 00:06:12.631 --rc geninfo_all_blocks=1 00:06:12.631 --rc geninfo_unexecuted_blocks=1 00:06:12.631 00:06:12.631 ' 00:06:12.631 18:37:12 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.631 --rc genhtml_branch_coverage=1 00:06:12.631 --rc genhtml_function_coverage=1 00:06:12.631 --rc genhtml_legend=1 00:06:12.631 --rc geninfo_all_blocks=1 00:06:12.632 --rc geninfo_unexecuted_blocks=1 00:06:12.632 00:06:12.632 ' 00:06:12.632 18:37:12 version -- app/version.sh@17 -- # get_header_version major 00:06:12.632 18:37:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # cut -f2 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.632 18:37:12 version -- app/version.sh@17 -- # major=25 00:06:12.632 18:37:12 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.632 18:37:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # cut -f2 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.632 18:37:12 version -- app/version.sh@18 -- # minor=1 00:06:12.632 18:37:12 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.632 18:37:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # cut -f2 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.632 18:37:12 version -- app/version.sh@19 -- # patch=0 00:06:12.632 
18:37:12 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.632 18:37:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # cut -f2 00:06:12.632 18:37:12 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.632 18:37:12 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.632 18:37:12 version -- app/version.sh@22 -- # version=25.1 00:06:12.632 18:37:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.632 18:37:12 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.632 18:37:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:12.632 18:37:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.632 18:37:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:12.632 18:37:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:12.632 ************************************ 00:06:12.632 END TEST version 00:06:12.632 ************************************ 00:06:12.632 00:06:12.632 real 0m0.328s 00:06:12.632 user 0m0.191s 00:06:12.632 sys 0m0.197s 00:06:12.632 18:37:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.632 18:37:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.893 18:37:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:12.893 18:37:13 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:12.893 18:37:13 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:12.893 18:37:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.893 18:37:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.893 18:37:13 -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.893 ************************************ 00:06:12.893 START TEST bdev_raid 00:06:12.893 ************************************ 00:06:12.893 18:37:13 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:12.893 * Looking for test storage... 00:06:12.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:12.893 18:37:13 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.894 18:37:13 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.894 --rc genhtml_branch_coverage=1 00:06:12.894 --rc genhtml_function_coverage=1 00:06:12.894 --rc genhtml_legend=1 00:06:12.894 --rc geninfo_all_blocks=1 00:06:12.894 --rc geninfo_unexecuted_blocks=1 00:06:12.894 00:06:12.894 ' 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.894 --rc genhtml_branch_coverage=1 00:06:12.894 --rc genhtml_function_coverage=1 00:06:12.894 --rc genhtml_legend=1 00:06:12.894 --rc geninfo_all_blocks=1 00:06:12.894 --rc geninfo_unexecuted_blocks=1 00:06:12.894 00:06:12.894 ' 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:06:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.894 --rc genhtml_branch_coverage=1 00:06:12.894 --rc genhtml_function_coverage=1 00:06:12.894 --rc genhtml_legend=1 00:06:12.894 --rc geninfo_all_blocks=1 00:06:12.894 --rc geninfo_unexecuted_blocks=1 00:06:12.894 00:06:12.894 ' 00:06:12.894 18:37:13 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.894 --rc genhtml_branch_coverage=1 00:06:12.894 --rc genhtml_function_coverage=1 00:06:12.894 --rc genhtml_legend=1 00:06:12.894 --rc geninfo_all_blocks=1 00:06:12.894 --rc geninfo_unexecuted_blocks=1 00:06:12.894 00:06:12.894 ' 00:06:12.894 18:37:13 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.894 18:37:13 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.894 18:37:13 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:13.163 18:37:13 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:13.163 18:37:13 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:13.163 18:37:13 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:13.163 18:37:13 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:13.163 18:37:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.163 18:37:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.163 18:37:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:13.163 ************************************ 00:06:13.163 START TEST raid1_resize_data_offset_test 00:06:13.163 ************************************ 00:06:13.163 Process raid pid: 73465 00:06:13.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=73465 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73465' 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73465 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.163 18:37:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:13.163 [2024-12-15 18:37:13.433789] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:13.163 [2024-12-15 18:37:13.433932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.434 [2024-12-15 18:37:13.606964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.434 [2024-12-15 18:37:13.647173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.434 [2024-12-15 18:37:13.724020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:13.434 [2024-12-15 18:37:13.724061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 malloc0 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 malloc1 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 null0 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 [2024-12-15 18:37:14.360347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:14.004 [2024-12-15 18:37:14.362463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:14.004 [2024-12-15 18:37:14.362516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:14.004 [2024-12-15 18:37:14.362650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:14.004 [2024-12-15 18:37:14.362662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:14.004 [2024-12-15 18:37:14.362949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:14.004 [2024-12-15 18:37:14.363101] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:14.004 [2024-12-15 18:37:14.363123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:14.004 [2024-12-15 18:37:14.363281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.004 [2024-12-15 18:37:14.420267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.004 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.264 malloc2 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.264 [2024-12-15 18:37:14.636749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:14.264 [2024-12-15 18:37:14.645788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.264 [2024-12-15 18:37:14.648070] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73465 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 73465 00:06:14.264 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:06:14.524 killing process with pid 73465 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 73465 00:06:14.524 [2024-12-15 18:37:14.742820] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:14.524 18:37:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 73465 00:06:14.524 [2024-12-15 18:37:14.743991] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:14.524 [2024-12-15 18:37:14.744057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:14.524 [2024-12-15 18:37:14.744076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:14.524 [2024-12-15 18:37:14.753149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:14.524 [2024-12-15 18:37:14.753486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:14.524 [2024-12-15 18:37:14.753504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:14.787 [2024-12-15 18:37:15.144663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:15.049 18:37:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:15.049 00:06:15.049 real 0m2.116s 00:06:15.049 user 0m1.928s 00:06:15.049 sys 0m0.624s 00:06:15.049 
************************************ 00:06:15.049 END TEST raid1_resize_data_offset_test 00:06:15.049 ************************************ 00:06:15.049 18:37:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.049 18:37:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.309 18:37:15 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:15.309 18:37:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.309 18:37:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.309 18:37:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:15.309 ************************************ 00:06:15.309 START TEST raid0_resize_superblock_test 00:06:15.309 ************************************ 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73521 00:06:15.309 Process raid pid: 73521 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73521' 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73521 00:06:15.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73521 ']' 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.309 18:37:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:15.309 [2024-12-15 18:37:15.620002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:15.309 [2024-12-15 18:37:15.620231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.568 [2024-12-15 18:37:15.791962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.568 [2024-12-15 18:37:15.830048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.568 [2024-12-15 18:37:15.905614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:15.569 [2024-12-15 18:37:15.905655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:16.137 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.137 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:16.137 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:06:16.137 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.137 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.397 malloc0 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.397 [2024-12-15 18:37:16.655766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:16.397 [2024-12-15 18:37:16.655869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.397 [2024-12-15 18:37:16.655897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:16.397 [2024-12-15 18:37:16.655909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.397 [2024-12-15 18:37:16.658443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.397 [2024-12-15 18:37:16.658520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:16.397 pt0 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.397 01dc4e3b-1546-4fcd-b4a5-6396b4e09225 00:06:16.397 18:37:16 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.397 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.656 a9d6f3b9-5c37-4314-9b34-800e6a257c96 00:06:16.656 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 d844ed27-c24b-42d6-9811-cd936ff96eca 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 [2024-12-15 18:37:16.863874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a9d6f3b9-5c37-4314-9b34-800e6a257c96 is claimed 00:06:16.657 [2024-12-15 18:37:16.863981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d844ed27-c24b-42d6-9811-cd936ff96eca is claimed 00:06:16.657 [2024-12-15 18:37:16.864096] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:16.657 [2024-12-15 18:37:16.864109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:16.657 [2024-12-15 18:37:16.864385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:16.657 [2024-12-15 18:37:16.864572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:16.657 [2024-12-15 18:37:16.864582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:16.657 [2024-12-15 18:37:16.864728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 18:37:16 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:16.657 [2024-12-15 18:37:16.979852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:16.657 18:37:16 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 [2024-12-15 18:37:17.007772] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.657 [2024-12-15 18:37:17.007860] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev 'a9d6f3b9-5c37-4314-9b34-800e6a257c96' was resized: old size 131072, new size 204800 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 [2024-12-15 18:37:17.019655] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:16.657 [2024-12-15 18:37:17.019721] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd844ed27-c24b-42d6-9811-cd936ff96eca' was resized: old size 131072, new size 204800 00:06:16.657 [2024-12-15 18:37:17.019750] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:16.657 18:37:17 
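The resize notices above are internally consistent: each 64 MiB lvol is 131072 blocks of 512 bytes, the 100 MiB target is 204800 blocks, and the raid0 totals (245760 before the resize, 393216 after) equal two base bdevs minus what appears to be an 8192-block per-base superblock/metadata reservation. That reservation is inferred purely from these log lines (131072 − 122880 = 8192), not from SPDK internals, so treat it as an assumption. A quick sketch of the arithmetic:

```python
# Sanity-check the block counts reported by the raid0 resize log entries.
# RESERVED is inferred from the log itself (131072 total minus 122880
# usable per base bdev); it is an assumption, not documented SPDK behavior.
BLOCK_SIZE = 512
RESERVED = 8192  # assumed per-base superblock/metadata blocks (inferred)

def mib_to_blocks(mib: int) -> int:
    """Convert a size in MiB to 512-byte blocks."""
    return mib * 1024 * 1024 // BLOCK_SIZE

def raid0_blocks(base_blocks: int, num_bases: int) -> int:
    """Usable raid0 block count: the data blocks of every base are striped."""
    return (base_blocks - RESERVED) * num_bases

old_base = mib_to_blocks(64)   # 131072, matches "old size 131072"
new_base = mib_to_blocks(100)  # 204800, matches "new size 204800"

print(raid0_blocks(old_base, 2))  # 245760, matches the pre-resize Raid bdev
print(raid0_blocks(new_base, 2))  # 393216, matches the post-resize notice
```

The `(( 245760 == 245760 ))` and `(( 393216 == 393216 ))` checks later in the log are the test script asserting exactly these totals via `bdev_get_bdevs | jq '.[].num_blocks'`.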
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.917 [2024-12-15 18:37:17.131627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:16.917 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.917 [2024-12-15 18:37:17.179283] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:16.917 [2024-12-15 18:37:17.179407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:16.917 [2024-12-15 18:37:17.179425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:16.918 [2024-12-15 18:37:17.179437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:16.918 [2024-12-15 18:37:17.179571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:16.918 [2024-12-15 18:37:17.179609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:16.918 [2024-12-15 18:37:17.179629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.918 [2024-12-15 18:37:17.191225] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:16.918 [2024-12-15 18:37:17.191312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.918 [2024-12-15 18:37:17.191350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:16.918 [2024-12-15 18:37:17.191380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.918 
[2024-12-15 18:37:17.193794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.918 [2024-12-15 18:37:17.193901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:16.918 [2024-12-15 18:37:17.195426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a9d6f3b9-5c37-4314-9b34-800e6a257c96 00:06:16.918 [2024-12-15 18:37:17.195529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a9d6f3b9-5c37-4314-9b34-800e6a257c96 is claimed 00:06:16.918 [2024-12-15 18:37:17.195654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d844ed27-c24b-42d6-9811-cd936ff96eca 00:06:16.918 [2024-12-15 18:37:17.195734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d844ed27-c24b-42d6-9811-cd936ff96eca is claimed 00:06:16.918 [2024-12-15 18:37:17.195899] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d844ed27-c24b-42d6-9811-cd936ff96eca (2) smaller than existing raid bdev Raid (3) 00:06:16.918 [2024-12-15 18:37:17.195965] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a9d6f3b9-5c37-4314-9b34-800e6a257c96: File exists 00:06:16.918 [2024-12-15 18:37:17.196041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:16.918 pt0 00:06:16.918 [2024-12-15 18:37:17.196080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:16.918 [2024-12-15 18:37:17.196352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:16.918 [2024-12-15 18:37:17.196490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:16.918 [2024-12-15 18:37:17.196500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:16.918 [2024-12-15 18:37:17.196613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:16.918 [2024-12-15 18:37:17.219669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73521 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 73521 ']' 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73521 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73521 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73521' 00:06:16.918 killing process with pid 73521 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73521 00:06:16.918 [2024-12-15 18:37:17.305100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:16.918 [2024-12-15 18:37:17.305205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:16.918 [2024-12-15 18:37:17.305275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:16.918 [2024-12-15 18:37:17.305319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:16.918 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73521 00:06:17.177 [2024-12-15 18:37:17.614172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:17.746 18:37:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:17.746 00:06:17.746 real 0m2.417s 00:06:17.746 user 0m2.563s 00:06:17.746 sys 0m0.639s 
00:06:17.746 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.746 18:37:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.746 ************************************ 00:06:17.746 END TEST raid0_resize_superblock_test 00:06:17.746 ************************************ 00:06:17.746 18:37:18 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:17.746 18:37:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.746 18:37:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.746 18:37:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:17.746 ************************************ 00:06:17.746 START TEST raid1_resize_superblock_test 00:06:17.746 ************************************ 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73597 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73597' 00:06:17.746 Process raid pid: 73597 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73597 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73597 ']' 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.746 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:17.746 [2024-12-15 18:37:18.111042] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:17.746 [2024-12-15 18:37:18.111252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.006 [2024-12-15 18:37:18.282111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.006 [2024-12-15 18:37:18.322349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.006 [2024-12-15 18:37:18.398070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.006 [2024-12-15 18:37:18.398209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.575 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.575 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:18.575 18:37:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:18.575 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.575 18:37:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:18.835 malloc0 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.835 [2024-12-15 18:37:19.159563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:18.835 [2024-12-15 18:37:19.159650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:18.835 [2024-12-15 18:37:19.159676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:18.835 [2024-12-15 18:37:19.159696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:18.835 [2024-12-15 18:37:19.162239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:18.835 [2024-12-15 18:37:19.162281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:18.835 pt0 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.835 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 69207e5d-cdd7-4985-9088-fa97ebe99a1c 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 7a52d64e-9f95-45a9-baf8-44028c514493 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 529b6c84-d135-46a7-8462-c3473cf4a0be 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 [2024-12-15 18:37:19.369985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7a52d64e-9f95-45a9-baf8-44028c514493 is claimed 00:06:19.094 [2024-12-15 18:37:19.370193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 529b6c84-d135-46a7-8462-c3473cf4a0be is claimed 00:06:19.094 [2024-12-15 18:37:19.370314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:19.094 [2024-12-15 18:37:19.370329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:19.094 [2024-12-15 18:37:19.370600] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:19.094 [2024-12-15 18:37:19.370804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:19.094 [2024-12-15 18:37:19.370843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:19.094 [2024-12-15 18:37:19.370988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:19.094 18:37:19 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.094 [2024-12-15 18:37:19.489948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.094 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.533795] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:19.354 [2024-12-15 18:37:19.533833] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7a52d64e-9f95-45a9-baf8-44028c514493' was resized: old size 131072, new size 204800 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.545705] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:19.354 [2024-12-15 18:37:19.545786] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '529b6c84-d135-46a7-8462-c3473cf4a0be' was resized: old size 131072, new size 204800 00:06:19.354 [2024-12-15 18:37:19.545872] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 
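The raid1 variant reports half the raid0 totals (122880 → 196608), consistent with a mirror exposing the usable capacity of a single base bdev rather than the sum. Assuming the same 8192-block per-base reservation inferred earlier from 131072 − 122880 (an assumption drawn from this log, not from SPDK documentation), the figures line up:

```python
# Same block arithmetic for the raid1 case: a two-way mirror exposes the
# usable capacity of ONE base bdev, so totals are half the raid0 figures.
# RESERVED is again inferred from the log (131072 - 122880), an assumption.
BLOCK_SIZE = 512
RESERVED = 8192  # assumed per-base superblock/metadata blocks (inferred)

def raid1_blocks(base_blocks: int) -> int:
    """Usable raid1 block count: one copy's data blocks."""
    return base_blocks - RESERVED

print(raid1_blocks(64 * 1024 * 1024 // BLOCK_SIZE))   # 122880 (pre-resize)
print(raid1_blocks(100 * 1024 * 1024 // BLOCK_SIZE))  # 196608 (post-resize)
```

This matches the `(( 122880 == 122880 ))` and `(( 196608 == 196608 ))` assertions the raid1 test performs against `bdev_get_bdevs -b Raid`.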
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.657619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.705352] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:19.354 [2024-12-15 18:37:19.705459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:19.354 [2024-12-15 18:37:19.705502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:19.354 [2024-12-15 18:37:19.705686] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:19.354 [2024-12-15 18:37:19.705907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.354 [2024-12-15 18:37:19.706001] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.354 [2024-12-15 18:37:19.706060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.717287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:19.354 [2024-12-15 18:37:19.717336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:19.354 [2024-12-15 18:37:19.717356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:19.354 [2024-12-15 18:37:19.717368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:19.354 [2024-12-15 18:37:19.719828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:19.354 [2024-12-15 18:37:19.719866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:19.354 [2024-12-15 18:37:19.721340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7a52d64e-9f95-45a9-baf8-44028c514493 00:06:19.354 [2024-12-15 18:37:19.721406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7a52d64e-9f95-45a9-baf8-44028c514493 is claimed 00:06:19.354 [2024-12-15 18:37:19.721487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 529b6c84-d135-46a7-8462-c3473cf4a0be 00:06:19.354 [2024-12-15 18:37:19.721508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 529b6c84-d135-46a7-8462-c3473cf4a0be is claimed 00:06:19.354 [2024-12-15 18:37:19.721622] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 529b6c84-d135-46a7-8462-c3473cf4a0be (2) smaller than existing raid bdev Raid (3) 00:06:19.354 [2024-12-15 18:37:19.721661] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7a52d64e-9f95-45a9-baf8-44028c514493: File exists 00:06:19.354 [2024-12-15 18:37:19.721703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:19.354 [2024-12-15 18:37:19.721713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:19.354 [2024-12-15 18:37:19.721996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:19.354 [2024-12-15 18:37:19.722131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:19.354 [2024-12-15 18:37:19.722147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:19.354 [2024-12-15 18:37:19.722262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.354 pt0 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 [2024-12-15 18:37:19.745588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:19.354 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73597 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73597 ']' 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73597 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.355 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73597 00:06:19.614 killing process with pid 73597 00:06:19.614 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.615 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.615 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73597' 00:06:19.615 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 73597 00:06:19.615 [2024-12-15 18:37:19.822533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:19.615 [2024-12-15 18:37:19.822603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.615 [2024-12-15 18:37:19.822643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.615 [2024-12-15 18:37:19.822651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:19.615 18:37:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 73597 00:06:19.874 [2024-12-15 18:37:20.130138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:20.133 ************************************ 00:06:20.133 END TEST raid1_resize_superblock_test 00:06:20.133 ************************************ 00:06:20.133 18:37:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:20.133 00:06:20.133 real 0m2.440s 00:06:20.134 user 0m2.573s 00:06:20.134 sys 0m0.666s 00:06:20.134 18:37:20 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.134 18:37:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:20.134 18:37:20 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:20.134 18:37:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.134 18:37:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.134 18:37:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:20.134 ************************************ 00:06:20.134 START TEST raid_function_test_raid0 00:06:20.134 ************************************ 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73673 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73673' 00:06:20.134 Process raid pid: 73673 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 73673 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 73673 ']' 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.134 18:37:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:20.394 [2024-12-15 18:37:20.652284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:20.394 [2024-12-15 18:37:20.652429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.394 [2024-12-15 18:37:20.827715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.654 [2024-12-15 18:37:20.866615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.654 [2024-12-15 18:37:20.944412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.654 [2024-12-15 18:37:20.944456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:21.224 18:37:21 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.224 Base_1 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.224 Base_2 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.224 [2024-12-15 18:37:21.537745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:21.224 [2024-12-15 18:37:21.539911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:21.224 [2024-12-15 18:37:21.539973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:21.224 [2024-12-15 18:37:21.539991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:21.224 [2024-12-15 18:37:21.540261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:21.224 [2024-12-15 18:37:21.540390] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000006280 00:06:21.224 [2024-12-15 18:37:21.540398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:21.224 [2024-12-15 18:37:21.540518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:21.224 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.225 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:21.225 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.225 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:21.225 18:37:21 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.225 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:21.225 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:21.484 [2024-12-15 18:37:21.785398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:21.484 /dev/nbd0 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.484 1+0 records in 00:06:21.484 1+0 records out 00:06:21.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588338 s, 7.0 MB/s 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:21.484 18:37:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.744 { 00:06:21.744 "nbd_device": "/dev/nbd0", 00:06:21.744 "bdev_name": "raid" 00:06:21.744 } 00:06:21.744 ]' 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.744 { 00:06:21.744 "nbd_device": "/dev/nbd0", 00:06:21.744 "bdev_name": "raid" 00:06:21.744 } 00:06:21.744 ]' 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:21.744 4096+0 records in 00:06:21.744 4096+0 records out 00:06:21.744 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0345233 s, 60.7 MB/s 00:06:21.744 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:22.003 4096+0 records in 00:06:22.003 4096+0 records out 00:06:22.003 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.224764 s, 9.3 MB/s 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:22.003 128+0 records in 00:06:22.003 128+0 records out 00:06:22.003 65536 bytes (66 kB, 64 KiB) copied, 0.00136354 s, 48.1 MB/s 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:22.003 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 
-- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:22.399 2035+0 records in 00:06:22.399 2035+0 records out 00:06:22.399 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0152736 s, 68.2 MB/s 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:22.399 456+0 records in 00:06:22.399 456+0 records out 00:06:22.399 233472 bytes (233 kB, 228 KiB) copied, 0.00423745 s, 55.1 MB/s 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:22.399 
18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.399 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.400 [2024-12-15 18:37:22.742607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:22.400 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:22.666 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.666 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.666 18:37:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73673 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # '[' -z 73673 ']' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 73673 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73673 00:06:22.666 killing process with pid 73673 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73673' 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 73673 00:06:22.666 [2024-12-15 18:37:23.062638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:22.666 [2024-12-15 18:37:23.062778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:22.666 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 73673 00:06:22.666 [2024-12-15 18:37:23.062851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:22.666 [2024-12-15 18:37:23.062867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:22.666 [2024-12-15 18:37:23.104744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:23.236 18:37:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:23.236 00:06:23.236 real 0m2.872s 00:06:23.236 user 0m3.406s 00:06:23.236 sys 0m1.040s 00:06:23.236 18:37:23 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.236 18:37:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:23.236 ************************************ 00:06:23.236 END TEST raid_function_test_raid0 00:06:23.236 ************************************ 00:06:23.236 18:37:23 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:23.236 18:37:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.236 18:37:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.236 18:37:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:23.236 ************************************ 00:06:23.236 START TEST raid_function_test_concat 00:06:23.236 ************************************ 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:23.236 Process raid pid: 73794 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73794 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73794' 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73794 00:06:23.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 73794 ']' 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.236 18:37:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:23.236 [2024-12-15 18:37:23.600664] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:23.236 [2024-12-15 18:37:23.600825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.495 [2024-12-15 18:37:23.778875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.495 [2024-12-15 18:37:23.818750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.495 [2024-12-15 18:37:23.896826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.495 [2024-12-15 18:37:23.896951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:24.064 18:37:24 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:24.064 Base_1 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:24.064 Base_2 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.064 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:24.065 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.065 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:24.065 [2024-12-15 18:37:24.501974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:24.325 [2024-12-15 18:37:24.504087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:24.325 [2024-12-15 18:37:24.504174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:24.325 [2024-12-15 18:37:24.504187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:24.325 [2024-12-15 18:37:24.504475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:24.325 [2024-12-15 18:37:24.504605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:24.325 [2024-12-15 18:37:24.504613] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:24.325 [2024-12-15 18:37:24.504751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.325 
18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:24.325 [2024-12-15 18:37:24.721593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:24.325 /dev/nbd0 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:24.325 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.326 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.585 1+0 records in 00:06:24.585 1+0 records out 00:06:24.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344488 s, 11.9 MB/s 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.585 { 00:06:24.585 "nbd_device": "/dev/nbd0", 00:06:24.585 "bdev_name": "raid" 00:06:24.585 } 00:06:24.585 ]' 00:06:24.585 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.585 { 00:06:24.585 "nbd_device": "/dev/nbd0", 00:06:24.585 "bdev_name": "raid" 00:06:24.585 } 00:06:24.585 ]' 00:06:24.586 18:37:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.586 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:24.586 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:24.586 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.845 18:37:25 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 
00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:24.845 4096+0 records in 00:06:24.845 4096+0 records out 00:06:24.845 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.031072 s, 67.5 MB/s 00:06:24.845 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:25.106 4096+0 records in 00:06:25.106 4096+0 records out 00:06:25.106 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.22561 s, 9.3 MB/s 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:25.106 128+0 records in 00:06:25.106 128+0 records out 00:06:25.106 65536 bytes (66 kB, 64 KiB) copied, 0.00100454 s, 65.2 MB/s 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:25.106 2035+0 records in 00:06:25.106 2035+0 records out 00:06:25.106 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012244 s, 85.1 MB/s 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:25.106 456+0 records in 00:06:25.106 456+0 records out 00:06:25.106 233472 bytes (233 kB, 228 KiB) copied, 0.00409974 s, 56.9 MB/s 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:25.106 18:37:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.106 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.366 [2024-12-15 18:37:25.643405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.366 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73794 00:06:25.626 18:37:25 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 73794 ']' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 73794 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73794 00:06:25.626 killing process with pid 73794 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73794' 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 73794 00:06:25.626 [2024-12-15 18:37:25.976508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:25.626 18:37:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 73794 00:06:25.626 [2024-12-15 18:37:25.976677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:25.626 [2024-12-15 18:37:25.976754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:25.626 [2024-12-15 18:37:25.976768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:25.626 [2024-12-15 18:37:26.018439] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.196 18:37:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:26.196 00:06:26.196 real 0m2.834s 00:06:26.196 user 0m3.401s 00:06:26.196 sys 0m1.001s 
00:06:26.196 18:37:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.196 18:37:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:26.196 ************************************ 00:06:26.196 END TEST raid_function_test_concat 00:06:26.196 ************************************ 00:06:26.196 18:37:26 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:26.196 18:37:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.196 18:37:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.196 18:37:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:26.196 ************************************ 00:06:26.196 START TEST raid0_resize_test 00:06:26.196 ************************************ 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73911 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc 
-i 0 -L bdev_raid 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73911' 00:06:26.196 Process raid pid: 73911 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73911 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73911 ']' 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.196 18:37:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.196 [2024-12-15 18:37:26.504937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:26.196 [2024-12-15 18:37:26.505175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.455 [2024-12-15 18:37:26.681385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.455 [2024-12-15 18:37:26.719380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.455 [2024-12-15 18:37:26.795122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.455 [2024-12-15 18:37:26.795166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.025 Base_1 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.025 Base_2 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.025 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.025 [2024-12-15 18:37:27.369084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:27.026 [2024-12-15 18:37:27.371219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:27.026 [2024-12-15 18:37:27.371273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:27.026 [2024-12-15 18:37:27.371283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:27.026 [2024-12-15 18:37:27.371537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:27.026 [2024-12-15 18:37:27.371645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:27.026 [2024-12-15 18:37:27.371653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:27.026 [2024-12-15 18:37:27.371773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.026 [2024-12-15 18:37:27.381042] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:27.026 [2024-12-15 18:37:27.381068] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:27.026 true 
00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:27.026 [2024-12-15 18:37:27.393202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.026 [2024-12-15 18:37:27.440953] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:27.026 [2024-12-15 18:37:27.441037] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:27.026 [2024-12-15 18:37:27.441098] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:27.026 true 
00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:27.026 [2024-12-15 18:37:27.453122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:27.026 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73911 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 73911 ']' 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 73911 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73911 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.286 18:37:27 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73911' 00:06:27.286 killing process with pid 73911 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 73911 00:06:27.286 [2024-12-15 18:37:27.537929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:27.286 [2024-12-15 18:37:27.538076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:27.286 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 73911 00:06:27.286 [2024-12-15 18:37:27.538157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:27.286 [2024-12-15 18:37:27.538169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:27.286 [2024-12-15 18:37:27.540360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:27.545 ************************************ 00:06:27.545 END TEST raid0_resize_test 00:06:27.545 ************************************ 00:06:27.545 18:37:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:27.545 00:06:27.545 real 0m1.455s 00:06:27.545 user 0m1.557s 00:06:27.545 sys 0m0.360s 00:06:27.545 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.545 18:37:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 18:37:27 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:27.545 18:37:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.545 18:37:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.545 18:37:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 
************************************ 00:06:27.545 START TEST raid1_resize_test 00:06:27.545 ************************************ 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:27.545 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73956 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73956' 00:06:27.546 Process raid pid: 73956 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73956 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 73956 ']' 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.546 18:37:27 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.804 [2024-12-15 18:37:28.022542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:27.804 [2024-12-15 18:37:28.022669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.804 [2024-12-15 18:37:28.196207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.804 [2024-12-15 18:37:28.234786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.061 [2024-12-15 18:37:28.310718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.061 [2024-12-15 18:37:28.310765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 Base_1 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 Base_2 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 [2024-12-15 18:37:28.872547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:28.628 [2024-12-15 18:37:28.874625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:28.628 [2024-12-15 18:37:28.874681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:28.628 [2024-12-15 18:37:28.874697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:28.628 [2024-12-15 18:37:28.874988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:06:28.628 [2024-12-15 18:37:28.875111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:28.628 [2024-12-15 18:37:28.875120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:28.628 [2024-12-15 18:37:28.875232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 [2024-12-15 18:37:28.880518] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.628 [2024-12-15 18:37:28.880549] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:28.628 true 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:28.628 [2024-12-15 18:37:28.892657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 [2024-12-15 18:37:28.944405] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:28.628 [2024-12-15 18:37:28.944430] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:28.628 [2024-12-15 18:37:28.944461] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:28.628 true 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.628 [2024-12-15 18:37:28.960559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:28.628 18:37:28 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73956 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 
-- # '[' -z 73956 ']' 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 73956 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73956 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73956' 00:06:28.628 killing process with pid 73956 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 73956 00:06:28.628 [2024-12-15 18:37:29.041734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:28.628 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 73956 00:06:28.628 [2024-12-15 18:37:29.041953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.628 [2024-12-15 18:37:29.042454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.628 [2024-12-15 18:37:29.042528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:28.628 [2024-12-15 18:37:29.044335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:29.199 18:37:29 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:29.199 00:06:29.199 real 0m1.437s 00:06:29.199 user 0m1.536s 00:06:29.199 sys 0m0.372s 00:06:29.199 18:37:29 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.199 18:37:29 bdev_raid.raid1_resize_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:29.199 ************************************ 00:06:29.199 END TEST raid1_resize_test 00:06:29.199 ************************************ 00:06:29.199 18:37:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:29.199 18:37:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:29.199 18:37:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:29.199 18:37:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:29.199 18:37:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.199 18:37:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.199 ************************************ 00:06:29.199 START TEST raid_state_function_test 00:06:29.199 ************************************ 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:29.199 
18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74013 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.199 18:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74013' 00:06:29.199 Process raid pid: 74013 00:06:29.200 18:37:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74013 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74013 ']' 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.200 18:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.200 [2024-12-15 18:37:29.534318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:29.200 [2024-12-15 18:37:29.534543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.459 [2024-12-15 18:37:29.704530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.459 [2024-12-15 18:37:29.742303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.459 [2024-12-15 18:37:29.818322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.459 [2024-12-15 18:37:29.818456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.028 [2024-12-15 18:37:30.368277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:30.028 [2024-12-15 18:37:30.368345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:30.028 [2024-12-15 18:37:30.368356] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:30.028 [2024-12-15 18:37:30.368369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.028 18:37:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:30.028 "name": "Existed_Raid", 00:06:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.028 "strip_size_kb": 64, 00:06:30.028 "state": "configuring", 00:06:30.028 
"raid_level": "raid0", 00:06:30.028 "superblock": false, 00:06:30.028 "num_base_bdevs": 2, 00:06:30.028 "num_base_bdevs_discovered": 0, 00:06:30.028 "num_base_bdevs_operational": 2, 00:06:30.028 "base_bdevs_list": [ 00:06:30.028 { 00:06:30.028 "name": "BaseBdev1", 00:06:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.028 "is_configured": false, 00:06:30.028 "data_offset": 0, 00:06:30.028 "data_size": 0 00:06:30.028 }, 00:06:30.028 { 00:06:30.028 "name": "BaseBdev2", 00:06:30.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.028 "is_configured": false, 00:06:30.028 "data_offset": 0, 00:06:30.028 "data_size": 0 00:06:30.028 } 00:06:30.028 ] 00:06:30.028 }' 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:30.028 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 [2024-12-15 18:37:30.815427] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:30.602 [2024-12-15 18:37:30.815536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:30.602 [2024-12-15 18:37:30.823417] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:30.602 [2024-12-15 18:37:30.823498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:30.602 [2024-12-15 18:37:30.823523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:30.602 [2024-12-15 18:37:30.823545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 [2024-12-15 18:37:30.846489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:30.602 BaseBdev1 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 [ 00:06:30.602 { 00:06:30.602 "name": "BaseBdev1", 00:06:30.602 "aliases": [ 00:06:30.602 "76cae340-6504-4384-9d03-222a388749a8" 00:06:30.602 ], 00:06:30.602 "product_name": "Malloc disk", 00:06:30.602 "block_size": 512, 00:06:30.602 "num_blocks": 65536, 00:06:30.602 "uuid": "76cae340-6504-4384-9d03-222a388749a8", 00:06:30.602 "assigned_rate_limits": { 00:06:30.602 "rw_ios_per_sec": 0, 00:06:30.602 "rw_mbytes_per_sec": 0, 00:06:30.602 "r_mbytes_per_sec": 0, 00:06:30.602 "w_mbytes_per_sec": 0 00:06:30.602 }, 00:06:30.602 "claimed": true, 00:06:30.602 "claim_type": "exclusive_write", 00:06:30.602 "zoned": false, 00:06:30.602 "supported_io_types": { 00:06:30.602 "read": true, 00:06:30.602 "write": true, 00:06:30.602 "unmap": true, 00:06:30.602 "flush": true, 00:06:30.602 "reset": true, 00:06:30.602 "nvme_admin": false, 00:06:30.602 "nvme_io": false, 00:06:30.602 "nvme_io_md": false, 00:06:30.602 "write_zeroes": true, 00:06:30.602 "zcopy": true, 00:06:30.602 "get_zone_info": false, 00:06:30.602 "zone_management": false, 00:06:30.602 "zone_append": false, 00:06:30.602 "compare": false, 00:06:30.602 "compare_and_write": false, 00:06:30.602 "abort": true, 00:06:30.602 "seek_hole": false, 00:06:30.602 "seek_data": false, 00:06:30.602 "copy": true, 00:06:30.602 "nvme_iov_md": 
false 00:06:30.602 }, 00:06:30.602 "memory_domains": [ 00:06:30.602 { 00:06:30.602 "dma_device_id": "system", 00:06:30.602 "dma_device_type": 1 00:06:30.602 }, 00:06:30.602 { 00:06:30.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:30.602 "dma_device_type": 2 00:06:30.602 } 00:06:30.602 ], 00:06:30.602 "driver_specific": {} 00:06:30.602 } 00:06:30.602 ] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:30.602 
18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:30.602 "name": "Existed_Raid", 00:06:30.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.602 "strip_size_kb": 64, 00:06:30.602 "state": "configuring", 00:06:30.602 "raid_level": "raid0", 00:06:30.602 "superblock": false, 00:06:30.602 "num_base_bdevs": 2, 00:06:30.602 "num_base_bdevs_discovered": 1, 00:06:30.602 "num_base_bdevs_operational": 2, 00:06:30.602 "base_bdevs_list": [ 00:06:30.602 { 00:06:30.602 "name": "BaseBdev1", 00:06:30.602 "uuid": "76cae340-6504-4384-9d03-222a388749a8", 00:06:30.602 "is_configured": true, 00:06:30.602 "data_offset": 0, 00:06:30.602 "data_size": 65536 00:06:30.602 }, 00:06:30.602 { 00:06:30.602 "name": "BaseBdev2", 00:06:30.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:30.602 "is_configured": false, 00:06:30.602 "data_offset": 0, 00:06:30.602 "data_size": 0 00:06:30.602 } 00:06:30.602 ] 00:06:30.602 }' 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:30.602 18:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 [2024-12-15 18:37:31.321766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:31.172 [2024-12-15 18:37:31.321899] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 [2024-12-15 18:37:31.333775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:31.172 [2024-12-15 18:37:31.336006] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:31.172 [2024-12-15 18:37:31.336081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.173 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.173 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.173 "name": "Existed_Raid", 00:06:31.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.173 "strip_size_kb": 64, 00:06:31.173 "state": "configuring", 00:06:31.173 "raid_level": "raid0", 00:06:31.173 "superblock": false, 00:06:31.173 "num_base_bdevs": 2, 00:06:31.173 "num_base_bdevs_discovered": 1, 00:06:31.173 "num_base_bdevs_operational": 2, 00:06:31.173 "base_bdevs_list": [ 00:06:31.173 { 00:06:31.173 "name": "BaseBdev1", 00:06:31.173 "uuid": "76cae340-6504-4384-9d03-222a388749a8", 00:06:31.173 "is_configured": true, 00:06:31.173 "data_offset": 0, 00:06:31.173 "data_size": 65536 00:06:31.173 }, 00:06:31.173 { 00:06:31.173 "name": "BaseBdev2", 00:06:31.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:31.173 "is_configured": false, 00:06:31.173 "data_offset": 0, 00:06:31.173 "data_size": 0 00:06:31.173 } 00:06:31.173 
] 00:06:31.173 }' 00:06:31.173 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:31.173 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.433 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:31.433 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.433 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.433 [2024-12-15 18:37:31.750028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:31.433 [2024-12-15 18:37:31.750161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:31.433 [2024-12-15 18:37:31.750176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:31.433 [2024-12-15 18:37:31.750508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:31.433 [2024-12-15 18:37:31.750681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:31.433 [2024-12-15 18:37:31.750703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:31.433 [2024-12-15 18:37:31.750963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:31.433 BaseBdev2 00:06:31.433 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.433 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:31.434 18:37:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.434 [ 00:06:31.434 { 00:06:31.434 "name": "BaseBdev2", 00:06:31.434 "aliases": [ 00:06:31.434 "73aa48e3-6520-4d04-8455-2690bf5e1c2f" 00:06:31.434 ], 00:06:31.434 "product_name": "Malloc disk", 00:06:31.434 "block_size": 512, 00:06:31.434 "num_blocks": 65536, 00:06:31.434 "uuid": "73aa48e3-6520-4d04-8455-2690bf5e1c2f", 00:06:31.434 "assigned_rate_limits": { 00:06:31.434 "rw_ios_per_sec": 0, 00:06:31.434 "rw_mbytes_per_sec": 0, 00:06:31.434 "r_mbytes_per_sec": 0, 00:06:31.434 "w_mbytes_per_sec": 0 00:06:31.434 }, 00:06:31.434 "claimed": true, 00:06:31.434 "claim_type": "exclusive_write", 00:06:31.434 "zoned": false, 00:06:31.434 "supported_io_types": { 00:06:31.434 "read": true, 00:06:31.434 "write": true, 00:06:31.434 "unmap": true, 00:06:31.434 "flush": true, 00:06:31.434 "reset": true, 00:06:31.434 "nvme_admin": false, 00:06:31.434 "nvme_io": false, 00:06:31.434 "nvme_io_md": 
false, 00:06:31.434 "write_zeroes": true, 00:06:31.434 "zcopy": true, 00:06:31.434 "get_zone_info": false, 00:06:31.434 "zone_management": false, 00:06:31.434 "zone_append": false, 00:06:31.434 "compare": false, 00:06:31.434 "compare_and_write": false, 00:06:31.434 "abort": true, 00:06:31.434 "seek_hole": false, 00:06:31.434 "seek_data": false, 00:06:31.434 "copy": true, 00:06:31.434 "nvme_iov_md": false 00:06:31.434 }, 00:06:31.434 "memory_domains": [ 00:06:31.434 { 00:06:31.434 "dma_device_id": "system", 00:06:31.434 "dma_device_type": 1 00:06:31.434 }, 00:06:31.434 { 00:06:31.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.434 "dma_device_type": 2 00:06:31.434 } 00:06:31.434 ], 00:06:31.434 "driver_specific": {} 00:06:31.434 } 00:06:31.434 ] 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:31.434 "name": "Existed_Raid", 00:06:31.434 "uuid": "946b0ab5-0a99-4fe2-a02d-a917bc2ae915", 00:06:31.434 "strip_size_kb": 64, 00:06:31.434 "state": "online", 00:06:31.434 "raid_level": "raid0", 00:06:31.434 "superblock": false, 00:06:31.434 "num_base_bdevs": 2, 00:06:31.434 "num_base_bdevs_discovered": 2, 00:06:31.434 "num_base_bdevs_operational": 2, 00:06:31.434 "base_bdevs_list": [ 00:06:31.434 { 00:06:31.434 "name": "BaseBdev1", 00:06:31.434 "uuid": "76cae340-6504-4384-9d03-222a388749a8", 00:06:31.434 "is_configured": true, 00:06:31.434 "data_offset": 0, 00:06:31.434 "data_size": 65536 00:06:31.434 }, 00:06:31.434 { 00:06:31.434 "name": "BaseBdev2", 00:06:31.434 "uuid": "73aa48e3-6520-4d04-8455-2690bf5e1c2f", 00:06:31.434 "is_configured": true, 00:06:31.434 "data_offset": 0, 00:06:31.434 "data_size": 65536 00:06:31.434 } 00:06:31.434 ] 00:06:31.434 }' 00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:31.434 18:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:32.002 [2024-12-15 18:37:32.185589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:32.002 "name": "Existed_Raid", 00:06:32.002 "aliases": [ 00:06:32.002 "946b0ab5-0a99-4fe2-a02d-a917bc2ae915" 00:06:32.002 ], 00:06:32.002 "product_name": "Raid Volume", 00:06:32.002 "block_size": 512, 00:06:32.002 "num_blocks": 131072, 00:06:32.002 "uuid": "946b0ab5-0a99-4fe2-a02d-a917bc2ae915", 00:06:32.002 "assigned_rate_limits": { 00:06:32.002 "rw_ios_per_sec": 0, 00:06:32.002 "rw_mbytes_per_sec": 0, 00:06:32.002 "r_mbytes_per_sec": 
0, 00:06:32.002 "w_mbytes_per_sec": 0 00:06:32.002 }, 00:06:32.002 "claimed": false, 00:06:32.002 "zoned": false, 00:06:32.002 "supported_io_types": { 00:06:32.002 "read": true, 00:06:32.002 "write": true, 00:06:32.002 "unmap": true, 00:06:32.002 "flush": true, 00:06:32.002 "reset": true, 00:06:32.002 "nvme_admin": false, 00:06:32.002 "nvme_io": false, 00:06:32.002 "nvme_io_md": false, 00:06:32.002 "write_zeroes": true, 00:06:32.002 "zcopy": false, 00:06:32.002 "get_zone_info": false, 00:06:32.002 "zone_management": false, 00:06:32.002 "zone_append": false, 00:06:32.002 "compare": false, 00:06:32.002 "compare_and_write": false, 00:06:32.002 "abort": false, 00:06:32.002 "seek_hole": false, 00:06:32.002 "seek_data": false, 00:06:32.002 "copy": false, 00:06:32.002 "nvme_iov_md": false 00:06:32.002 }, 00:06:32.002 "memory_domains": [ 00:06:32.002 { 00:06:32.002 "dma_device_id": "system", 00:06:32.002 "dma_device_type": 1 00:06:32.002 }, 00:06:32.002 { 00:06:32.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.002 "dma_device_type": 2 00:06:32.002 }, 00:06:32.002 { 00:06:32.002 "dma_device_id": "system", 00:06:32.002 "dma_device_type": 1 00:06:32.002 }, 00:06:32.002 { 00:06:32.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.002 "dma_device_type": 2 00:06:32.002 } 00:06:32.002 ], 00:06:32.002 "driver_specific": { 00:06:32.002 "raid": { 00:06:32.002 "uuid": "946b0ab5-0a99-4fe2-a02d-a917bc2ae915", 00:06:32.002 "strip_size_kb": 64, 00:06:32.002 "state": "online", 00:06:32.002 "raid_level": "raid0", 00:06:32.002 "superblock": false, 00:06:32.002 "num_base_bdevs": 2, 00:06:32.002 "num_base_bdevs_discovered": 2, 00:06:32.002 "num_base_bdevs_operational": 2, 00:06:32.002 "base_bdevs_list": [ 00:06:32.002 { 00:06:32.002 "name": "BaseBdev1", 00:06:32.002 "uuid": "76cae340-6504-4384-9d03-222a388749a8", 00:06:32.002 "is_configured": true, 00:06:32.002 "data_offset": 0, 00:06:32.002 "data_size": 65536 00:06:32.002 }, 00:06:32.002 { 00:06:32.002 "name": "BaseBdev2", 
00:06:32.002 "uuid": "73aa48e3-6520-4d04-8455-2690bf5e1c2f", 00:06:32.002 "is_configured": true, 00:06:32.002 "data_offset": 0, 00:06:32.002 "data_size": 65536 00:06:32.002 } 00:06:32.002 ] 00:06:32.002 } 00:06:32.002 } 00:06:32.002 }' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:32.002 BaseBdev2' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:32.002 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.003 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.003 [2024-12-15 18:37:32.436986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:32.003 [2024-12-15 18:37:32.437023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:32.003 [2024-12-15 18:37:32.437083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:32.260 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:32.261 "name": "Existed_Raid", 00:06:32.261 "uuid": "946b0ab5-0a99-4fe2-a02d-a917bc2ae915", 00:06:32.261 "strip_size_kb": 64, 00:06:32.261 
"state": "offline", 00:06:32.261 "raid_level": "raid0", 00:06:32.261 "superblock": false, 00:06:32.261 "num_base_bdevs": 2, 00:06:32.261 "num_base_bdevs_discovered": 1, 00:06:32.261 "num_base_bdevs_operational": 1, 00:06:32.261 "base_bdevs_list": [ 00:06:32.261 { 00:06:32.261 "name": null, 00:06:32.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:32.261 "is_configured": false, 00:06:32.261 "data_offset": 0, 00:06:32.261 "data_size": 65536 00:06:32.261 }, 00:06:32.261 { 00:06:32.261 "name": "BaseBdev2", 00:06:32.261 "uuid": "73aa48e3-6520-4d04-8455-2690bf5e1c2f", 00:06:32.261 "is_configured": true, 00:06:32.261 "data_offset": 0, 00:06:32.261 "data_size": 65536 00:06:32.261 } 00:06:32.261 ] 00:06:32.261 }' 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:32.261 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 [2024-12-15 18:37:32.932785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:32.519 [2024-12-15 18:37:32.932947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:32.519 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:32.778 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:32.778 18:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.778 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.778 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.778 18:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74013 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74013 ']' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 74013 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74013 00:06:32.778 killing process with pid 74013 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74013' 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74013 00:06:32.778 [2024-12-15 18:37:33.047516] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.778 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74013 00:06:32.778 [2024-12-15 18:37:33.049079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:33.036 00:06:33.036 real 0m3.930s 00:06:33.036 user 0m6.067s 00:06:33.036 sys 0m0.772s 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 ************************************ 00:06:33.036 END TEST raid_state_function_test 00:06:33.036 ************************************ 00:06:33.036 18:37:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:33.036 18:37:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:33.036 18:37:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.036 18:37:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 ************************************ 00:06:33.036 START TEST raid_state_function_test_sb 00:06:33.036 ************************************ 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:33.036 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:33.037 Process raid pid: 74250 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74250 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74250' 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74250 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74250 ']' 00:06:33.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.037 18:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:33.295 [2024-12-15 18:37:33.529808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:33.295 [2024-12-15 18:37:33.529946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.295 [2024-12-15 18:37:33.680440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.295 [2024-12-15 18:37:33.719345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.554 [2024-12-15 18:37:33.794885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.554 [2024-12-15 18:37:33.794925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.121 [2024-12-15 18:37:34.360592] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:34.121 [2024-12-15 18:37:34.360654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:34.121 [2024-12-15 18:37:34.360665] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.121 [2024-12-15 18:37:34.360675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.121 18:37:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.121 "name": "Existed_Raid", 00:06:34.121 "uuid": "3c70db5b-9686-4000-b53f-2bf82a390e91", 00:06:34.121 "strip_size_kb": 64, 00:06:34.121 "state": "configuring", 00:06:34.121 "raid_level": "raid0", 00:06:34.121 "superblock": true, 00:06:34.121 "num_base_bdevs": 2, 00:06:34.121 "num_base_bdevs_discovered": 0, 00:06:34.121 "num_base_bdevs_operational": 2, 00:06:34.121 "base_bdevs_list": [ 00:06:34.121 { 00:06:34.121 "name": "BaseBdev1", 00:06:34.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.121 "is_configured": false, 00:06:34.121 "data_offset": 0, 00:06:34.121 "data_size": 0 00:06:34.121 }, 00:06:34.121 { 00:06:34.121 "name": "BaseBdev2", 00:06:34.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.121 "is_configured": false, 00:06:34.121 "data_offset": 0, 00:06:34.121 "data_size": 0 00:06:34.121 } 00:06:34.121 ] 00:06:34.121 }' 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.121 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 [2024-12-15 18:37:34.735906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.380 [2024-12-15 18:37:34.735956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 [2024-12-15 18:37:34.743904] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:34.380 [2024-12-15 18:37:34.743946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:34.380 [2024-12-15 18:37:34.743954] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.380 [2024-12-15 18:37:34.743964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 [2024-12-15 18:37:34.766938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:06:34.380 BaseBdev1 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.380 [ 00:06:34.380 { 00:06:34.380 "name": "BaseBdev1", 00:06:34.380 "aliases": [ 00:06:34.380 "6c35dcb7-7cde-4b44-8793-18437879289a" 00:06:34.380 ], 00:06:34.380 "product_name": "Malloc disk", 00:06:34.380 "block_size": 512, 00:06:34.380 "num_blocks": 65536, 00:06:34.380 "uuid": "6c35dcb7-7cde-4b44-8793-18437879289a", 00:06:34.380 
"assigned_rate_limits": { 00:06:34.380 "rw_ios_per_sec": 0, 00:06:34.380 "rw_mbytes_per_sec": 0, 00:06:34.380 "r_mbytes_per_sec": 0, 00:06:34.380 "w_mbytes_per_sec": 0 00:06:34.380 }, 00:06:34.380 "claimed": true, 00:06:34.380 "claim_type": "exclusive_write", 00:06:34.380 "zoned": false, 00:06:34.380 "supported_io_types": { 00:06:34.380 "read": true, 00:06:34.380 "write": true, 00:06:34.380 "unmap": true, 00:06:34.380 "flush": true, 00:06:34.380 "reset": true, 00:06:34.380 "nvme_admin": false, 00:06:34.380 "nvme_io": false, 00:06:34.380 "nvme_io_md": false, 00:06:34.380 "write_zeroes": true, 00:06:34.380 "zcopy": true, 00:06:34.380 "get_zone_info": false, 00:06:34.380 "zone_management": false, 00:06:34.380 "zone_append": false, 00:06:34.380 "compare": false, 00:06:34.380 "compare_and_write": false, 00:06:34.380 "abort": true, 00:06:34.380 "seek_hole": false, 00:06:34.380 "seek_data": false, 00:06:34.380 "copy": true, 00:06:34.380 "nvme_iov_md": false 00:06:34.380 }, 00:06:34.380 "memory_domains": [ 00:06:34.380 { 00:06:34.380 "dma_device_id": "system", 00:06:34.380 "dma_device_type": 1 00:06:34.380 }, 00:06:34.380 { 00:06:34.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.380 "dma_device_type": 2 00:06:34.380 } 00:06:34.380 ], 00:06:34.380 "driver_specific": {} 00:06:34.380 } 00:06:34.380 ] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.380 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.381 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.639 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.639 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.639 "name": "Existed_Raid", 00:06:34.639 "uuid": "d71c0167-867d-4742-857e-25c68fb3f232", 00:06:34.639 "strip_size_kb": 64, 00:06:34.639 "state": "configuring", 00:06:34.639 "raid_level": "raid0", 00:06:34.639 "superblock": true, 00:06:34.639 "num_base_bdevs": 2, 00:06:34.639 "num_base_bdevs_discovered": 1, 00:06:34.639 "num_base_bdevs_operational": 2, 00:06:34.639 "base_bdevs_list": [ 00:06:34.639 { 00:06:34.639 "name": "BaseBdev1", 00:06:34.639 "uuid": "6c35dcb7-7cde-4b44-8793-18437879289a", 00:06:34.639 "is_configured": true, 00:06:34.639 "data_offset": 2048, 
00:06:34.639 "data_size": 63488 00:06:34.639 }, 00:06:34.639 { 00:06:34.639 "name": "BaseBdev2", 00:06:34.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.639 "is_configured": false, 00:06:34.639 "data_offset": 0, 00:06:34.639 "data_size": 0 00:06:34.639 } 00:06:34.639 ] 00:06:34.639 }' 00:06:34.639 18:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.639 18:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.898 [2024-12-15 18:37:35.206222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.898 [2024-12-15 18:37:35.206282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:34.898 [2024-12-15 18:37:35.214216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.898 [2024-12-15 18:37:35.216263] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.898 [2024-12-15 18:37:35.216299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.898 "name": "Existed_Raid", 00:06:34.898 "uuid": "e074e4f7-0468-4aef-a463-cfd283bc34a8", 00:06:34.898 "strip_size_kb": 64, 00:06:34.898 "state": "configuring", 00:06:34.898 "raid_level": "raid0", 00:06:34.898 "superblock": true, 00:06:34.898 "num_base_bdevs": 2, 00:06:34.898 "num_base_bdevs_discovered": 1, 00:06:34.898 "num_base_bdevs_operational": 2, 00:06:34.898 "base_bdevs_list": [ 00:06:34.898 { 00:06:34.898 "name": "BaseBdev1", 00:06:34.898 "uuid": "6c35dcb7-7cde-4b44-8793-18437879289a", 00:06:34.898 "is_configured": true, 00:06:34.898 "data_offset": 2048, 00:06:34.898 "data_size": 63488 00:06:34.898 }, 00:06:34.898 { 00:06:34.898 "name": "BaseBdev2", 00:06:34.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.898 "is_configured": false, 00:06:34.898 "data_offset": 0, 00:06:34.898 "data_size": 0 00:06:34.898 } 00:06:34.898 ] 00:06:34.898 }' 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.898 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 [2024-12-15 18:37:35.674350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:35.477 [2024-12-15 18:37:35.674581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:35.477 [2024-12-15 18:37:35.674597] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:35.477 BaseBdev2 00:06:35.477 [2024-12-15 18:37:35.674913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:35.477 [2024-12-15 18:37:35.675075] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:35.477 [2024-12-15 18:37:35.675098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:35.477 [2024-12-15 18:37:35.675213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 [ 00:06:35.477 { 00:06:35.477 "name": "BaseBdev2", 00:06:35.477 "aliases": [ 00:06:35.477 "ee19f2f5-51f0-44d3-89ab-5908acf15abc" 00:06:35.477 ], 00:06:35.477 "product_name": "Malloc disk", 00:06:35.477 "block_size": 512, 00:06:35.477 "num_blocks": 65536, 00:06:35.477 "uuid": "ee19f2f5-51f0-44d3-89ab-5908acf15abc", 00:06:35.477 "assigned_rate_limits": { 00:06:35.477 "rw_ios_per_sec": 0, 00:06:35.477 "rw_mbytes_per_sec": 0, 00:06:35.477 "r_mbytes_per_sec": 0, 00:06:35.477 "w_mbytes_per_sec": 0 00:06:35.477 }, 00:06:35.477 "claimed": true, 00:06:35.477 "claim_type": "exclusive_write", 00:06:35.477 "zoned": false, 00:06:35.477 "supported_io_types": { 00:06:35.477 "read": true, 00:06:35.477 "write": true, 00:06:35.477 "unmap": true, 00:06:35.477 "flush": true, 00:06:35.477 "reset": true, 00:06:35.477 "nvme_admin": false, 00:06:35.477 "nvme_io": false, 00:06:35.477 "nvme_io_md": false, 00:06:35.477 "write_zeroes": true, 00:06:35.477 "zcopy": true, 00:06:35.477 "get_zone_info": false, 00:06:35.477 "zone_management": false, 00:06:35.477 "zone_append": false, 00:06:35.477 "compare": false, 00:06:35.477 "compare_and_write": false, 00:06:35.477 "abort": true, 00:06:35.477 "seek_hole": false, 00:06:35.477 "seek_data": false, 00:06:35.477 "copy": true, 00:06:35.477 "nvme_iov_md": false 00:06:35.477 }, 00:06:35.477 "memory_domains": [ 00:06:35.477 { 00:06:35.477 "dma_device_id": "system", 00:06:35.477 "dma_device_type": 1 00:06:35.477 }, 00:06:35.477 { 00:06:35.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.477 "dma_device_type": 2 00:06:35.477 } 00:06:35.477 ], 00:06:35.477 "driver_specific": {} 00:06:35.477 } 00:06:35.477 ] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.477 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.477 "name": "Existed_Raid", 00:06:35.477 "uuid": "e074e4f7-0468-4aef-a463-cfd283bc34a8", 00:06:35.477 "strip_size_kb": 64, 00:06:35.477 "state": "online", 00:06:35.477 "raid_level": "raid0", 00:06:35.477 "superblock": true, 00:06:35.477 "num_base_bdevs": 2, 00:06:35.477 "num_base_bdevs_discovered": 2, 00:06:35.477 "num_base_bdevs_operational": 2, 00:06:35.477 "base_bdevs_list": [ 00:06:35.477 { 00:06:35.477 "name": "BaseBdev1", 00:06:35.477 "uuid": "6c35dcb7-7cde-4b44-8793-18437879289a", 00:06:35.478 "is_configured": true, 00:06:35.478 "data_offset": 2048, 00:06:35.478 "data_size": 63488 00:06:35.478 }, 00:06:35.478 { 00:06:35.478 "name": "BaseBdev2", 00:06:35.478 "uuid": "ee19f2f5-51f0-44d3-89ab-5908acf15abc", 00:06:35.478 "is_configured": true, 00:06:35.478 "data_offset": 2048, 00:06:35.478 "data_size": 63488 00:06:35.478 } 00:06:35.478 ] 00:06:35.478 }' 00:06:35.478 18:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.478 18:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:35.751 [2024-12-15 18:37:36.145910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.751 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:35.751 "name": "Existed_Raid", 00:06:35.751 "aliases": [ 00:06:35.751 "e074e4f7-0468-4aef-a463-cfd283bc34a8" 00:06:35.751 ], 00:06:35.751 "product_name": "Raid Volume", 00:06:35.751 "block_size": 512, 00:06:35.751 "num_blocks": 126976, 00:06:35.751 "uuid": "e074e4f7-0468-4aef-a463-cfd283bc34a8", 00:06:35.751 "assigned_rate_limits": { 00:06:35.751 "rw_ios_per_sec": 0, 00:06:35.751 "rw_mbytes_per_sec": 0, 00:06:35.751 "r_mbytes_per_sec": 0, 00:06:35.751 "w_mbytes_per_sec": 0 00:06:35.751 }, 00:06:35.751 "claimed": false, 00:06:35.751 "zoned": false, 00:06:35.751 "supported_io_types": { 00:06:35.751 "read": true, 00:06:35.751 "write": true, 00:06:35.751 "unmap": true, 00:06:35.751 "flush": true, 00:06:35.751 "reset": true, 00:06:35.751 "nvme_admin": false, 00:06:35.751 "nvme_io": false, 00:06:35.751 "nvme_io_md": false, 00:06:35.751 "write_zeroes": true, 00:06:35.751 "zcopy": false, 00:06:35.751 "get_zone_info": false, 00:06:35.751 "zone_management": false, 00:06:35.751 "zone_append": false, 00:06:35.751 "compare": false, 00:06:35.751 "compare_and_write": false, 00:06:35.751 "abort": false, 00:06:35.751 "seek_hole": false, 
00:06:35.751 "seek_data": false, 00:06:35.751 "copy": false, 00:06:35.751 "nvme_iov_md": false 00:06:35.751 }, 00:06:35.751 "memory_domains": [ 00:06:35.751 { 00:06:35.751 "dma_device_id": "system", 00:06:35.751 "dma_device_type": 1 00:06:35.751 }, 00:06:35.751 { 00:06:35.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.751 "dma_device_type": 2 00:06:35.751 }, 00:06:35.751 { 00:06:35.751 "dma_device_id": "system", 00:06:35.751 "dma_device_type": 1 00:06:35.751 }, 00:06:35.751 { 00:06:35.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.751 "dma_device_type": 2 00:06:35.751 } 00:06:35.751 ], 00:06:35.751 "driver_specific": { 00:06:35.751 "raid": { 00:06:35.751 "uuid": "e074e4f7-0468-4aef-a463-cfd283bc34a8", 00:06:35.751 "strip_size_kb": 64, 00:06:35.751 "state": "online", 00:06:35.751 "raid_level": "raid0", 00:06:35.751 "superblock": true, 00:06:35.751 "num_base_bdevs": 2, 00:06:35.751 "num_base_bdevs_discovered": 2, 00:06:35.751 "num_base_bdevs_operational": 2, 00:06:35.751 "base_bdevs_list": [ 00:06:35.751 { 00:06:35.751 "name": "BaseBdev1", 00:06:35.751 "uuid": "6c35dcb7-7cde-4b44-8793-18437879289a", 00:06:35.751 "is_configured": true, 00:06:35.751 "data_offset": 2048, 00:06:35.751 "data_size": 63488 00:06:35.751 }, 00:06:35.751 { 00:06:35.751 "name": "BaseBdev2", 00:06:35.751 "uuid": "ee19f2f5-51f0-44d3-89ab-5908acf15abc", 00:06:35.751 "is_configured": true, 00:06:35.751 "data_offset": 2048, 00:06:35.751 "data_size": 63488 00:06:35.751 } 00:06:35.751 ] 00:06:35.751 } 00:06:35.751 } 00:06:35.751 }' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:36.033 BaseBdev2' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:36.033 18:37:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.033 [2024-12-15 18:37:36.361272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:36.033 [2024-12-15 18:37:36.361355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:36.033 [2024-12-15 18:37:36.361427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.033 "name": "Existed_Raid", 00:06:36.033 "uuid": "e074e4f7-0468-4aef-a463-cfd283bc34a8", 00:06:36.033 "strip_size_kb": 64, 00:06:36.033 "state": "offline", 00:06:36.033 "raid_level": "raid0", 00:06:36.033 "superblock": true, 00:06:36.033 "num_base_bdevs": 2, 00:06:36.033 "num_base_bdevs_discovered": 1, 00:06:36.033 "num_base_bdevs_operational": 1, 00:06:36.033 "base_bdevs_list": [ 00:06:36.033 { 00:06:36.033 "name": null, 00:06:36.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.033 "is_configured": false, 00:06:36.033 "data_offset": 0, 00:06:36.033 "data_size": 63488 00:06:36.033 }, 00:06:36.033 { 00:06:36.033 "name": "BaseBdev2", 00:06:36.033 "uuid": 
"ee19f2f5-51f0-44d3-89ab-5908acf15abc", 00:06:36.033 "is_configured": true, 00:06:36.033 "data_offset": 2048, 00:06:36.033 "data_size": 63488 00:06:36.033 } 00:06:36.033 ] 00:06:36.033 }' 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.033 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.602 [2024-12-15 18:37:36.836730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:36.602 [2024-12-15 18:37:36.836845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name 
Existed_Raid, state offline 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74250 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74250 ']' 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74250 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:36.602 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.603 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74250 00:06:36.603 killing process with pid 74250 00:06:36.603 18:37:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.603 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.603 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74250' 00:06:36.603 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74250 00:06:36.603 [2024-12-15 18:37:36.930468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:36.603 18:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74250 00:06:36.603 [2024-12-15 18:37:36.931984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.862 ************************************ 00:06:36.862 END TEST raid_state_function_test_sb 00:06:36.862 ************************************ 00:06:36.862 18:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:36.862 00:06:36.862 real 0m3.817s 00:06:36.862 user 0m5.858s 00:06:36.862 sys 0m0.785s 00:06:36.862 18:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.862 18:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.122 18:37:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:37.122 18:37:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:37.122 18:37:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.122 18:37:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:37.122 ************************************ 00:06:37.122 START TEST raid_superblock_test 00:06:37.122 ************************************ 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:37.122 
18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74485 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 74485 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74485 ']' 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.122 18:37:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.122 [2024-12-15 18:37:37.414931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:37.122 [2024-12-15 18:37:37.415069] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74485 ] 00:06:37.382 [2024-12-15 18:37:37.590259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.382 [2024-12-15 18:37:37.628333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.382 [2024-12-15 18:37:37.704176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.382 [2024-12-15 18:37:37.704234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.950 18:37:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 malloc1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 [2024-12-15 18:37:38.277490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:37.950 [2024-12-15 18:37:38.277645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.950 [2024-12-15 18:37:38.277687] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:37.950 [2024-12-15 18:37:38.277724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.950 [2024-12-15 18:37:38.280159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.950 [2024-12-15 18:37:38.280243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:37.950 pt1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 malloc2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 [2024-12-15 18:37:38.316088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:37.950 [2024-12-15 18:37:38.316152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.950 [2024-12-15 18:37:38.316175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:37.950 [2024-12-15 18:37:38.316187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.950 [2024-12-15 18:37:38.318575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.950 [2024-12-15 18:37:38.318613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:37.950 pt2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.950 [2024-12-15 18:37:38.328093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:37.950 [2024-12-15 18:37:38.330129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:37.950 [2024-12-15 18:37:38.330346] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:37.950 [2024-12-15 18:37:38.330365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:37.950 [2024-12-15 18:37:38.330651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:37.950 [2024-12-15 18:37:38.330779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:37.950 [2024-12-15 18:37:38.330788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:06:37.950 [2024-12-15 18:37:38.331031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.950 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.951 "name": "raid_bdev1", 00:06:37.951 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:37.951 "strip_size_kb": 64, 00:06:37.951 "state": "online", 00:06:37.951 "raid_level": "raid0", 00:06:37.951 "superblock": true, 00:06:37.951 "num_base_bdevs": 2, 00:06:37.951 "num_base_bdevs_discovered": 2, 00:06:37.951 "num_base_bdevs_operational": 2, 00:06:37.951 "base_bdevs_list": [ 00:06:37.951 { 00:06:37.951 "name": "pt1", 00:06:37.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:37.951 "is_configured": true, 00:06:37.951 "data_offset": 2048, 00:06:37.951 "data_size": 63488 00:06:37.951 }, 00:06:37.951 { 00:06:37.951 "name": "pt2", 00:06:37.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:37.951 "is_configured": true, 00:06:37.951 "data_offset": 2048, 00:06:37.951 "data_size": 63488 00:06:37.951 } 00:06:37.951 ] 00:06:37.951 }' 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.951 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 [2024-12-15 18:37:38.807555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:38.519 "name": "raid_bdev1", 00:06:38.519 "aliases": [ 00:06:38.519 "26bde24c-0593-4e88-8ee8-2c05637bdc67" 00:06:38.519 ], 00:06:38.519 "product_name": "Raid Volume", 00:06:38.519 "block_size": 512, 00:06:38.519 "num_blocks": 126976, 00:06:38.519 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:38.519 "assigned_rate_limits": { 00:06:38.519 "rw_ios_per_sec": 0, 00:06:38.519 "rw_mbytes_per_sec": 0, 00:06:38.519 "r_mbytes_per_sec": 0, 00:06:38.519 "w_mbytes_per_sec": 0 00:06:38.519 }, 00:06:38.519 "claimed": false, 00:06:38.519 "zoned": false, 00:06:38.519 "supported_io_types": { 00:06:38.519 "read": true, 00:06:38.519 "write": true, 00:06:38.519 "unmap": true, 00:06:38.519 "flush": true, 00:06:38.519 "reset": true, 00:06:38.519 "nvme_admin": false, 00:06:38.519 "nvme_io": false, 00:06:38.519 "nvme_io_md": false, 00:06:38.519 "write_zeroes": true, 00:06:38.519 "zcopy": false, 00:06:38.519 "get_zone_info": false, 
00:06:38.519 "zone_management": false, 00:06:38.519 "zone_append": false, 00:06:38.519 "compare": false, 00:06:38.519 "compare_and_write": false, 00:06:38.519 "abort": false, 00:06:38.519 "seek_hole": false, 00:06:38.519 "seek_data": false, 00:06:38.519 "copy": false, 00:06:38.519 "nvme_iov_md": false 00:06:38.519 }, 00:06:38.519 "memory_domains": [ 00:06:38.519 { 00:06:38.519 "dma_device_id": "system", 00:06:38.519 "dma_device_type": 1 00:06:38.519 }, 00:06:38.519 { 00:06:38.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.519 "dma_device_type": 2 00:06:38.519 }, 00:06:38.519 { 00:06:38.519 "dma_device_id": "system", 00:06:38.519 "dma_device_type": 1 00:06:38.519 }, 00:06:38.519 { 00:06:38.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.519 "dma_device_type": 2 00:06:38.519 } 00:06:38.519 ], 00:06:38.519 "driver_specific": { 00:06:38.519 "raid": { 00:06:38.519 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:38.519 "strip_size_kb": 64, 00:06:38.519 "state": "online", 00:06:38.519 "raid_level": "raid0", 00:06:38.519 "superblock": true, 00:06:38.519 "num_base_bdevs": 2, 00:06:38.519 "num_base_bdevs_discovered": 2, 00:06:38.519 "num_base_bdevs_operational": 2, 00:06:38.519 "base_bdevs_list": [ 00:06:38.519 { 00:06:38.519 "name": "pt1", 00:06:38.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:38.519 "is_configured": true, 00:06:38.519 "data_offset": 2048, 00:06:38.519 "data_size": 63488 00:06:38.519 }, 00:06:38.519 { 00:06:38.519 "name": "pt2", 00:06:38.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:38.519 "is_configured": true, 00:06:38.519 "data_offset": 2048, 00:06:38.519 "data_size": 63488 00:06:38.519 } 00:06:38.519 ] 00:06:38.519 } 00:06:38.519 } 00:06:38.519 }' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:06:38.519 pt2' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.519 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.779 18:37:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 [2024-12-15 18:37:39.011120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=26bde24c-0593-4e88-8ee8-2c05637bdc67 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 26bde24c-0593-4e88-8ee8-2c05637bdc67 ']' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 [2024-12-15 18:37:39.054815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.779 [2024-12-15 18:37:39.054885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.779 [2024-12-15 18:37:39.054995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.779 [2024-12-15 18:37:39.055087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.779 [2024-12-15 18:37:39.055140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name 
raid_bdev1, state offline 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq 
-r '[.[] | select(.product_name == "passthru")] | any' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 [2024-12-15 18:37:39.174625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 
00:06:38.779 [2024-12-15 18:37:39.176821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:38.779 [2024-12-15 18:37:39.176931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:38.779 [2024-12-15 18:37:39.177009] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:38.779 [2024-12-15 18:37:39.177058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:38.779 [2024-12-15 18:37:39.177089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:06:38.779 request: 00:06:38.779 { 00:06:38.779 "name": "raid_bdev1", 00:06:38.779 "raid_level": "raid0", 00:06:38.779 "base_bdevs": [ 00:06:38.779 "malloc1", 00:06:38.779 "malloc2" 00:06:38.779 ], 00:06:38.779 "strip_size_kb": 64, 00:06:38.779 "superblock": false, 00:06:38.779 "method": "bdev_raid_create", 00:06:38.779 "req_id": 1 00:06:38.779 } 00:06:38.779 Got JSON-RPC error response 00:06:38.779 response: 00:06:38.779 { 00:06:38.779 "code": -17, 00:06:38.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:38.779 } 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r 
'.[]' 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.779 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.039 [2024-12-15 18:37:39.238464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:39.039 [2024-12-15 18:37:39.238557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.039 [2024-12-15 18:37:39.238590] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:39.039 [2024-12-15 18:37:39.238624] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.039 [2024-12-15 18:37:39.241124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.039 [2024-12-15 18:37:39.241193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:39.039 [2024-12-15 18:37:39.241295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:39.039 [2024-12-15 18:37:39.241365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:39.039 pt1 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.039 18:37:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.039 "name": "raid_bdev1", 00:06:39.039 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:39.039 "strip_size_kb": 64, 00:06:39.039 "state": "configuring", 00:06:39.039 "raid_level": "raid0", 00:06:39.039 "superblock": true, 00:06:39.039 
"num_base_bdevs": 2, 00:06:39.039 "num_base_bdevs_discovered": 1, 00:06:39.039 "num_base_bdevs_operational": 2, 00:06:39.039 "base_bdevs_list": [ 00:06:39.039 { 00:06:39.039 "name": "pt1", 00:06:39.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:39.039 "is_configured": true, 00:06:39.039 "data_offset": 2048, 00:06:39.039 "data_size": 63488 00:06:39.039 }, 00:06:39.039 { 00:06:39.039 "name": null, 00:06:39.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:39.039 "is_configured": false, 00:06:39.039 "data_offset": 2048, 00:06:39.039 "data_size": 63488 00:06:39.039 } 00:06:39.039 ] 00:06:39.039 }' 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.039 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.299 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.299 [2024-12-15 18:37:39.653816] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:39.299 [2024-12-15 18:37:39.653884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.299 [2024-12-15 18:37:39.653907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:39.299 [2024-12-15 18:37:39.653917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.299 [2024-12-15 
18:37:39.654362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.299 [2024-12-15 18:37:39.654379] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:39.299 [2024-12-15 18:37:39.654456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:39.299 [2024-12-15 18:37:39.654475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:39.299 [2024-12-15 18:37:39.654576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:39.299 [2024-12-15 18:37:39.654584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:39.299 [2024-12-15 18:37:39.654846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:06:39.300 [2024-12-15 18:37:39.654969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:39.300 [2024-12-15 18:37:39.654984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:39.300 [2024-12-15 18:37:39.655083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.300 pt2 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.300 "name": "raid_bdev1", 00:06:39.300 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:39.300 "strip_size_kb": 64, 00:06:39.300 "state": "online", 00:06:39.300 "raid_level": "raid0", 00:06:39.300 "superblock": true, 00:06:39.300 "num_base_bdevs": 2, 00:06:39.300 "num_base_bdevs_discovered": 2, 00:06:39.300 "num_base_bdevs_operational": 2, 00:06:39.300 "base_bdevs_list": [ 00:06:39.300 { 00:06:39.300 "name": "pt1", 00:06:39.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:39.300 "is_configured": true, 00:06:39.300 "data_offset": 2048, 00:06:39.300 "data_size": 63488 00:06:39.300 }, 00:06:39.300 { 00:06:39.300 "name": "pt2", 00:06:39.300 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:06:39.300 "is_configured": true, 00:06:39.300 "data_offset": 2048, 00:06:39.300 "data_size": 63488 00:06:39.300 } 00:06:39.300 ] 00:06:39.300 }' 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.300 18:37:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.868 [2024-12-15 18:37:40.085317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.868 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:39.868 "name": "raid_bdev1", 00:06:39.868 "aliases": [ 00:06:39.868 "26bde24c-0593-4e88-8ee8-2c05637bdc67" 00:06:39.868 ], 00:06:39.868 "product_name": "Raid Volume", 00:06:39.868 "block_size": 512, 00:06:39.868 
"num_blocks": 126976, 00:06:39.868 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:39.868 "assigned_rate_limits": { 00:06:39.868 "rw_ios_per_sec": 0, 00:06:39.868 "rw_mbytes_per_sec": 0, 00:06:39.868 "r_mbytes_per_sec": 0, 00:06:39.868 "w_mbytes_per_sec": 0 00:06:39.868 }, 00:06:39.868 "claimed": false, 00:06:39.868 "zoned": false, 00:06:39.868 "supported_io_types": { 00:06:39.869 "read": true, 00:06:39.869 "write": true, 00:06:39.869 "unmap": true, 00:06:39.869 "flush": true, 00:06:39.869 "reset": true, 00:06:39.869 "nvme_admin": false, 00:06:39.869 "nvme_io": false, 00:06:39.869 "nvme_io_md": false, 00:06:39.869 "write_zeroes": true, 00:06:39.869 "zcopy": false, 00:06:39.869 "get_zone_info": false, 00:06:39.869 "zone_management": false, 00:06:39.869 "zone_append": false, 00:06:39.869 "compare": false, 00:06:39.869 "compare_and_write": false, 00:06:39.869 "abort": false, 00:06:39.869 "seek_hole": false, 00:06:39.869 "seek_data": false, 00:06:39.869 "copy": false, 00:06:39.869 "nvme_iov_md": false 00:06:39.869 }, 00:06:39.869 "memory_domains": [ 00:06:39.869 { 00:06:39.869 "dma_device_id": "system", 00:06:39.869 "dma_device_type": 1 00:06:39.869 }, 00:06:39.869 { 00:06:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.869 "dma_device_type": 2 00:06:39.869 }, 00:06:39.869 { 00:06:39.869 "dma_device_id": "system", 00:06:39.869 "dma_device_type": 1 00:06:39.869 }, 00:06:39.869 { 00:06:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:39.869 "dma_device_type": 2 00:06:39.869 } 00:06:39.869 ], 00:06:39.869 "driver_specific": { 00:06:39.869 "raid": { 00:06:39.869 "uuid": "26bde24c-0593-4e88-8ee8-2c05637bdc67", 00:06:39.869 "strip_size_kb": 64, 00:06:39.869 "state": "online", 00:06:39.869 "raid_level": "raid0", 00:06:39.869 "superblock": true, 00:06:39.869 "num_base_bdevs": 2, 00:06:39.869 "num_base_bdevs_discovered": 2, 00:06:39.869 "num_base_bdevs_operational": 2, 00:06:39.869 "base_bdevs_list": [ 00:06:39.869 { 00:06:39.869 "name": "pt1", 
00:06:39.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:39.869 "is_configured": true, 00:06:39.869 "data_offset": 2048, 00:06:39.869 "data_size": 63488 00:06:39.869 }, 00:06:39.869 { 00:06:39.869 "name": "pt2", 00:06:39.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:39.869 "is_configured": true, 00:06:39.869 "data_offset": 2048, 00:06:39.869 "data_size": 63488 00:06:39.869 } 00:06:39.869 ] 00:06:39.869 } 00:06:39.869 } 00:06:39.869 }' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:39.869 pt2' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.869 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:39.869 [2024-12-15 18:37:40.288954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 26bde24c-0593-4e88-8ee8-2c05637bdc67 '!=' 26bde24c-0593-4e88-8ee8-2c05637bdc67 ']' 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 74485 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74485 ']' 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74485 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74485 00:06:40.128 killing process with pid 74485 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74485' 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74485 00:06:40.128 [2024-12-15 18:37:40.360619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.128 [2024-12-15 18:37:40.360701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.128 [2024-12-15 18:37:40.360757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.128 [2024-12-15 18:37:40.360767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:06:40.128 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74485 00:06:40.128 [2024-12-15 18:37:40.402387] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.388 18:37:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:40.388 00:06:40.388 real 0m3.397s 00:06:40.388 user 0m5.148s 00:06:40.388 
sys 0m0.741s 00:06:40.388 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.388 18:37:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.388 ************************************ 00:06:40.388 END TEST raid_superblock_test 00:06:40.388 ************************************ 00:06:40.388 18:37:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:40.388 18:37:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.388 18:37:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.388 18:37:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.388 ************************************ 00:06:40.388 START TEST raid_read_error_test 00:06:40.388 ************************************ 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gsFwEs3eY8 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74691 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74691 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74691 ']' 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.388 18:37:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.648 [2024-12-15 18:37:40.885518] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:40.648 [2024-12-15 18:37:40.885642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74691 ] 00:06:40.648 [2024-12-15 18:37:41.035641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.648 [2024-12-15 18:37:41.075741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.907 [2024-12-15 18:37:41.151236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.907 [2024-12-15 18:37:41.151357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 BaseBdev1_malloc 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 true 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 [2024-12-15 18:37:41.759595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:41.476 [2024-12-15 18:37:41.759657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.476 [2024-12-15 18:37:41.759679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:41.476 [2024-12-15 18:37:41.759695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.476 [2024-12-15 18:37:41.762057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.476 [2024-12-15 18:37:41.762095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:41.476 BaseBdev1 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 BaseBdev2_malloc 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 true 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.476 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.477 [2024-12-15 18:37:41.806091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:41.477 [2024-12-15 18:37:41.806140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.477 [2024-12-15 18:37:41.806162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:41.477 [2024-12-15 18:37:41.806171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.477 [2024-12-15 18:37:41.808397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:06:41.477 [2024-12-15 18:37:41.808435] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:41.477 BaseBdev2 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.477 [2024-12-15 18:37:41.818147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:41.477 [2024-12-15 18:37:41.820198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:41.477 [2024-12-15 18:37:41.820380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:41.477 [2024-12-15 18:37:41.820393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:41.477 [2024-12-15 18:37:41.820668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:41.477 [2024-12-15 18:37:41.820828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:41.477 [2024-12-15 18:37:41.820841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:41.477 [2024-12-15 18:37:41.820973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.477 "name": "raid_bdev1", 00:06:41.477 "uuid": "5a533106-a7ca-430f-b280-09517a8db688", 00:06:41.477 "strip_size_kb": 64, 00:06:41.477 "state": "online", 00:06:41.477 "raid_level": "raid0", 00:06:41.477 "superblock": true, 00:06:41.477 "num_base_bdevs": 2, 00:06:41.477 "num_base_bdevs_discovered": 2, 00:06:41.477 "num_base_bdevs_operational": 2, 00:06:41.477 "base_bdevs_list": [ 00:06:41.477 { 00:06:41.477 "name": "BaseBdev1", 00:06:41.477 "uuid": 
"e1ea92f7-284c-53e2-985e-4b052041e31f", 00:06:41.477 "is_configured": true, 00:06:41.477 "data_offset": 2048, 00:06:41.477 "data_size": 63488 00:06:41.477 }, 00:06:41.477 { 00:06:41.477 "name": "BaseBdev2", 00:06:41.477 "uuid": "d596823f-6b0e-5888-936c-eb2f815a5ffd", 00:06:41.477 "is_configured": true, 00:06:41.477 "data_offset": 2048, 00:06:41.477 "data_size": 63488 00:06:41.477 } 00:06:41.477 ] 00:06:41.477 }' 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.477 18:37:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.045 18:37:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:42.045 18:37:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:42.045 [2024-12-15 18:37:42.361670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.984 "name": "raid_bdev1", 00:06:42.984 "uuid": "5a533106-a7ca-430f-b280-09517a8db688", 00:06:42.984 "strip_size_kb": 64, 00:06:42.984 "state": "online", 00:06:42.984 "raid_level": "raid0", 00:06:42.984 "superblock": true, 00:06:42.984 "num_base_bdevs": 2, 00:06:42.984 "num_base_bdevs_discovered": 2, 00:06:42.984 "num_base_bdevs_operational": 2, 00:06:42.984 "base_bdevs_list": [ 00:06:42.984 { 00:06:42.984 "name": "BaseBdev1", 00:06:42.984 "uuid": 
"e1ea92f7-284c-53e2-985e-4b052041e31f", 00:06:42.984 "is_configured": true, 00:06:42.984 "data_offset": 2048, 00:06:42.984 "data_size": 63488 00:06:42.984 }, 00:06:42.984 { 00:06:42.984 "name": "BaseBdev2", 00:06:42.984 "uuid": "d596823f-6b0e-5888-936c-eb2f815a5ffd", 00:06:42.984 "is_configured": true, 00:06:42.984 "data_offset": 2048, 00:06:42.984 "data_size": 63488 00:06:42.984 } 00:06:42.984 ] 00:06:42.984 }' 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.984 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.553 [2024-12-15 18:37:43.706424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:43.553 [2024-12-15 18:37:43.706474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.553 [2024-12-15 18:37:43.709224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.553 [2024-12-15 18:37:43.709301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.553 [2024-12-15 18:37:43.709361] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.553 [2024-12-15 18:37:43.709421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.553 { 00:06:43.553 "results": [ 00:06:43.553 { 00:06:43.553 "job": "raid_bdev1", 00:06:43.553 "core_mask": "0x1", 00:06:43.553 "workload": "randrw", 00:06:43.553 
"percentage": 50, 00:06:43.553 "status": "finished", 00:06:43.553 "queue_depth": 1, 00:06:43.553 "io_size": 131072, 00:06:43.553 "runtime": 1.345365, 00:06:43.553 "iops": 14835.379246524177, 00:06:43.553 "mibps": 1854.422405815522, 00:06:43.553 "io_failed": 1, 00:06:43.553 "io_timeout": 0, 00:06:43.553 "avg_latency_us": 94.17195543926282, 00:06:43.553 "min_latency_us": 24.817467248908297, 00:06:43.553 "max_latency_us": 1516.7720524017468 00:06:43.553 } 00:06:43.553 ], 00:06:43.553 "core_count": 1 00:06:43.553 } 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74691 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74691 ']' 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74691 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74691 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74691' 00:06:43.553 killing process with pid 74691 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74691 00:06:43.553 [2024-12-15 18:37:43.758217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.553 18:37:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74691 00:06:43.553 [2024-12-15 18:37:43.787865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.813 
18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gsFwEs3eY8 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:43.813 ************************************ 00:06:43.813 END TEST raid_read_error_test 00:06:43.813 ************************************ 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:06:43.813 00:06:43.813 real 0m3.339s 00:06:43.813 user 0m4.156s 00:06:43.813 sys 0m0.571s 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.813 18:37:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.813 18:37:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:43.813 18:37:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:43.813 18:37:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.813 18:37:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.813 ************************************ 00:06:43.813 START TEST raid_write_error_test 00:06:43.813 ************************************ 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 
00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:43.813 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:43.814 
18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hmYzvCMKSw 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74820 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74820 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74820 ']' 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.814 18:37:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.073 [2024-12-15 18:37:44.296028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:44.073 [2024-12-15 18:37:44.296270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74820 ] 00:06:44.073 [2024-12-15 18:37:44.465494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.073 [2024-12-15 18:37:44.505804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.333 [2024-12-15 18:37:44.581815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.333 [2024-12-15 18:37:44.581956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 BaseBdev1_malloc 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 true 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 [2024-12-15 18:37:45.186753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:44.906 [2024-12-15 18:37:45.186835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.906 [2024-12-15 18:37:45.186874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:44.906 [2024-12-15 18:37:45.186891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.906 [2024-12-15 18:37:45.189372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.906 [2024-12-15 18:37:45.189411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:44.906 BaseBdev1 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 BaseBdev2_malloc 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:44.906 18:37:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 true 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 [2024-12-15 18:37:45.233621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:44.906 [2024-12-15 18:37:45.233755] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.906 [2024-12-15 18:37:45.233781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:44.906 [2024-12-15 18:37:45.233790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.906 [2024-12-15 18:37:45.236173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.906 [2024-12-15 18:37:45.236217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:44.906 BaseBdev2 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 [2024-12-15 18:37:45.245671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:44.906 [2024-12-15 18:37:45.247791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.906 [2024-12-15 18:37:45.247971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:44.906 [2024-12-15 18:37:45.247984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.906 [2024-12-15 18:37:45.248277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:44.906 [2024-12-15 18:37:45.248439] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:44.906 [2024-12-15 18:37:45.248453] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:44.906 [2024-12-15 18:37:45.248575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.906 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.906 "name": "raid_bdev1", 00:06:44.906 "uuid": "ac554751-c989-4ef4-9c49-8d2c28f9b74d", 00:06:44.906 "strip_size_kb": 64, 00:06:44.906 "state": "online", 00:06:44.906 "raid_level": "raid0", 00:06:44.906 "superblock": true, 00:06:44.906 "num_base_bdevs": 2, 00:06:44.906 "num_base_bdevs_discovered": 2, 00:06:44.906 "num_base_bdevs_operational": 2, 00:06:44.906 "base_bdevs_list": [ 00:06:44.906 { 00:06:44.906 "name": "BaseBdev1", 00:06:44.906 "uuid": "12e7cb91-62f3-576a-824b-07b2dfbed598", 00:06:44.906 "is_configured": true, 00:06:44.906 "data_offset": 2048, 00:06:44.906 "data_size": 63488 00:06:44.906 }, 00:06:44.906 { 00:06:44.906 "name": "BaseBdev2", 00:06:44.906 "uuid": "eddc3ba2-03ad-5aa6-8838-e5f6ccb06cad", 00:06:44.906 "is_configured": true, 00:06:44.907 "data_offset": 2048, 00:06:44.907 "data_size": 63488 00:06:44.907 } 00:06:44.907 ] 00:06:44.907 }' 00:06:44.907 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.907 18:37:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.474 18:37:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:45.474 18:37:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:45.474 [2024-12-15 18:37:45.785321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.424 18:37:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.424 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.424 "name": "raid_bdev1", 00:06:46.424 "uuid": "ac554751-c989-4ef4-9c49-8d2c28f9b74d", 00:06:46.424 "strip_size_kb": 64, 00:06:46.424 "state": "online", 00:06:46.424 "raid_level": "raid0", 00:06:46.424 "superblock": true, 00:06:46.424 "num_base_bdevs": 2, 00:06:46.424 "num_base_bdevs_discovered": 2, 00:06:46.424 "num_base_bdevs_operational": 2, 00:06:46.424 "base_bdevs_list": [ 00:06:46.424 { 00:06:46.424 "name": "BaseBdev1", 00:06:46.425 "uuid": "12e7cb91-62f3-576a-824b-07b2dfbed598", 00:06:46.425 "is_configured": true, 00:06:46.425 "data_offset": 2048, 00:06:46.425 "data_size": 63488 00:06:46.425 }, 00:06:46.425 { 00:06:46.425 "name": "BaseBdev2", 00:06:46.425 "uuid": "eddc3ba2-03ad-5aa6-8838-e5f6ccb06cad", 00:06:46.425 "is_configured": true, 00:06:46.425 "data_offset": 2048, 00:06:46.425 "data_size": 63488 00:06:46.425 } 00:06:46.425 ] 00:06:46.425 }' 00:06:46.425 18:37:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.425 18:37:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.994 [2024-12-15 18:37:47.174146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:46.994 [2024-12-15 18:37:47.174292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.994 [2024-12-15 18:37:47.176894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.994 [2024-12-15 18:37:47.176977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.994 [2024-12-15 18:37:47.177035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.994 [2024-12-15 18:37:47.177076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:06:46.994 { 00:06:46.994 "results": [ 00:06:46.994 { 00:06:46.994 "job": "raid_bdev1", 00:06:46.994 "core_mask": "0x1", 00:06:46.994 "workload": "randrw", 00:06:46.994 "percentage": 50, 00:06:46.994 "status": "finished", 00:06:46.994 "queue_depth": 1, 00:06:46.994 "io_size": 131072, 00:06:46.994 "runtime": 1.389533, 00:06:46.994 "iops": 15187.116822702303, 00:06:46.994 "mibps": 1898.3896028377878, 00:06:46.994 "io_failed": 1, 00:06:46.994 "io_timeout": 0, 00:06:46.994 "avg_latency_us": 91.88184776743002, 00:06:46.994 "min_latency_us": 25.152838427947597, 00:06:46.994 "max_latency_us": 1287.825327510917 00:06:46.994 } 00:06:46.994 ], 00:06:46.994 "core_count": 1 00:06:46.994 } 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74820 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 74820 ']' 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74820 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74820 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74820' 00:06:46.994 killing process with pid 74820 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74820 00:06:46.994 [2024-12-15 18:37:47.211611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.994 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74820 00:06:46.994 [2024-12-15 18:37:47.239796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hmYzvCMKSw 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:06:47.254 00:06:47.254 real 0m3.376s 00:06:47.254 user 0m4.215s 00:06:47.254 sys 0m0.591s 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.254 ************************************ 00:06:47.254 END TEST raid_write_error_test 00:06:47.254 ************************************ 00:06:47.254 18:37:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.254 18:37:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:47.254 18:37:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:47.254 18:37:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:47.254 18:37:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.254 18:37:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:47.254 ************************************ 00:06:47.254 START TEST raid_state_function_test 00:06:47.254 ************************************ 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:47.254 Process raid pid: 74947 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74947 
00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74947' 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74947 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74947 ']' 00:06:47.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.254 18:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.514 [2024-12-15 18:37:47.741275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:47.514 [2024-12-15 18:37:47.741407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.514 [2024-12-15 18:37:47.890572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.514 [2024-12-15 18:37:47.930218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.774 [2024-12-15 18:37:48.005936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.774 [2024-12-15 18:37:48.005974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.343 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.343 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:48.343 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 [2024-12-15 18:37:48.579684] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:48.344 [2024-12-15 18:37:48.579761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:48.344 [2024-12-15 18:37:48.579772] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.344 [2024-12-15 18:37:48.579782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.344 18:37:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.344 "name": "Existed_Raid", 00:06:48.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.344 "strip_size_kb": 64, 00:06:48.344 "state": "configuring", 00:06:48.344 
"raid_level": "concat", 00:06:48.344 "superblock": false, 00:06:48.344 "num_base_bdevs": 2, 00:06:48.344 "num_base_bdevs_discovered": 0, 00:06:48.344 "num_base_bdevs_operational": 2, 00:06:48.344 "base_bdevs_list": [ 00:06:48.344 { 00:06:48.344 "name": "BaseBdev1", 00:06:48.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.344 "is_configured": false, 00:06:48.344 "data_offset": 0, 00:06:48.344 "data_size": 0 00:06:48.344 }, 00:06:48.344 { 00:06:48.344 "name": "BaseBdev2", 00:06:48.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.344 "is_configured": false, 00:06:48.344 "data_offset": 0, 00:06:48.344 "data_size": 0 00:06:48.344 } 00:06:48.344 ] 00:06:48.344 }' 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.344 18:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.604 [2024-12-15 18:37:49.030860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:48.604 [2024-12-15 18:37:49.031015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.604 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:48.604 [2024-12-15 18:37:49.042794] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:48.604 [2024-12-15 18:37:49.042855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:48.604 [2024-12-15 18:37:49.042864] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:48.604 [2024-12-15 18:37:49.042877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 [2024-12-15 18:37:49.069976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:48.864 BaseBdev1 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.864 [ 00:06:48.864 { 00:06:48.864 "name": "BaseBdev1", 00:06:48.864 "aliases": [ 00:06:48.864 "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5" 00:06:48.864 ], 00:06:48.864 "product_name": "Malloc disk", 00:06:48.864 "block_size": 512, 00:06:48.864 "num_blocks": 65536, 00:06:48.864 "uuid": "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5", 00:06:48.864 "assigned_rate_limits": { 00:06:48.864 "rw_ios_per_sec": 0, 00:06:48.864 "rw_mbytes_per_sec": 0, 00:06:48.864 "r_mbytes_per_sec": 0, 00:06:48.864 "w_mbytes_per_sec": 0 00:06:48.864 }, 00:06:48.864 "claimed": true, 00:06:48.864 "claim_type": "exclusive_write", 00:06:48.864 "zoned": false, 00:06:48.864 "supported_io_types": { 00:06:48.864 "read": true, 00:06:48.864 "write": true, 00:06:48.864 "unmap": true, 00:06:48.864 "flush": true, 00:06:48.864 "reset": true, 00:06:48.864 "nvme_admin": false, 00:06:48.864 "nvme_io": false, 00:06:48.864 "nvme_io_md": false, 00:06:48.864 "write_zeroes": true, 00:06:48.864 "zcopy": true, 00:06:48.864 "get_zone_info": false, 00:06:48.864 "zone_management": false, 00:06:48.864 "zone_append": false, 00:06:48.864 "compare": false, 00:06:48.864 "compare_and_write": false, 00:06:48.864 "abort": true, 00:06:48.864 "seek_hole": false, 00:06:48.864 "seek_data": false, 00:06:48.864 "copy": true, 00:06:48.864 "nvme_iov_md": 
false 00:06:48.864 }, 00:06:48.864 "memory_domains": [ 00:06:48.864 { 00:06:48.864 "dma_device_id": "system", 00:06:48.864 "dma_device_type": 1 00:06:48.864 }, 00:06:48.864 { 00:06:48.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.864 "dma_device_type": 2 00:06:48.864 } 00:06:48.864 ], 00:06:48.864 "driver_specific": {} 00:06:48.864 } 00:06:48.864 ] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.864 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.865 
18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.865 "name": "Existed_Raid", 00:06:48.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.865 "strip_size_kb": 64, 00:06:48.865 "state": "configuring", 00:06:48.865 "raid_level": "concat", 00:06:48.865 "superblock": false, 00:06:48.865 "num_base_bdevs": 2, 00:06:48.865 "num_base_bdevs_discovered": 1, 00:06:48.865 "num_base_bdevs_operational": 2, 00:06:48.865 "base_bdevs_list": [ 00:06:48.865 { 00:06:48.865 "name": "BaseBdev1", 00:06:48.865 "uuid": "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5", 00:06:48.865 "is_configured": true, 00:06:48.865 "data_offset": 0, 00:06:48.865 "data_size": 65536 00:06:48.865 }, 00:06:48.865 { 00:06:48.865 "name": "BaseBdev2", 00:06:48.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.865 "is_configured": false, 00:06:48.865 "data_offset": 0, 00:06:48.865 "data_size": 0 00:06:48.865 } 00:06:48.865 ] 00:06:48.865 }' 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.865 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.434 [2024-12-15 18:37:49.589188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:49.434 [2024-12-15 18:37:49.589269] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.434 [2024-12-15 18:37:49.601163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:49.434 [2024-12-15 18:37:49.603285] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:49.434 [2024-12-15 18:37:49.603335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.434 "name": "Existed_Raid", 00:06:49.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.434 "strip_size_kb": 64, 00:06:49.434 "state": "configuring", 00:06:49.434 "raid_level": "concat", 00:06:49.434 "superblock": false, 00:06:49.434 "num_base_bdevs": 2, 00:06:49.434 "num_base_bdevs_discovered": 1, 00:06:49.434 "num_base_bdevs_operational": 2, 00:06:49.434 "base_bdevs_list": [ 00:06:49.434 { 00:06:49.434 "name": "BaseBdev1", 00:06:49.434 "uuid": "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5", 00:06:49.434 "is_configured": true, 00:06:49.434 "data_offset": 0, 00:06:49.434 "data_size": 65536 00:06:49.434 }, 00:06:49.434 { 00:06:49.434 "name": "BaseBdev2", 00:06:49.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:49.434 "is_configured": false, 00:06:49.434 "data_offset": 0, 00:06:49.434 "data_size": 0 00:06:49.434 } 
00:06:49.434 ] 00:06:49.434 }' 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.434 18:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.694 [2024-12-15 18:37:50.081111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:49.694 [2024-12-15 18:37:50.081269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:49.694 [2024-12-15 18:37:50.081305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:49.694 [2024-12-15 18:37:50.081654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.694 [2024-12-15 18:37:50.081876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:49.694 [2024-12-15 18:37:50.081928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:49.694 [2024-12-15 18:37:50.082196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.694 BaseBdev2 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:49.694 18:37:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.694 [ 00:06:49.694 { 00:06:49.694 "name": "BaseBdev2", 00:06:49.694 "aliases": [ 00:06:49.694 "1c5183a6-5be3-4a09-86b9-d95867e76347" 00:06:49.694 ], 00:06:49.694 "product_name": "Malloc disk", 00:06:49.694 "block_size": 512, 00:06:49.694 "num_blocks": 65536, 00:06:49.694 "uuid": "1c5183a6-5be3-4a09-86b9-d95867e76347", 00:06:49.694 "assigned_rate_limits": { 00:06:49.694 "rw_ios_per_sec": 0, 00:06:49.694 "rw_mbytes_per_sec": 0, 00:06:49.694 "r_mbytes_per_sec": 0, 00:06:49.694 "w_mbytes_per_sec": 0 00:06:49.694 }, 00:06:49.694 "claimed": true, 00:06:49.694 "claim_type": "exclusive_write", 00:06:49.694 "zoned": false, 00:06:49.694 "supported_io_types": { 00:06:49.694 "read": true, 00:06:49.694 "write": true, 00:06:49.694 "unmap": true, 00:06:49.694 "flush": true, 00:06:49.694 "reset": true, 00:06:49.694 "nvme_admin": false, 00:06:49.694 "nvme_io": false, 00:06:49.694 "nvme_io_md": 
false, 00:06:49.694 "write_zeroes": true, 00:06:49.694 "zcopy": true, 00:06:49.694 "get_zone_info": false, 00:06:49.694 "zone_management": false, 00:06:49.694 "zone_append": false, 00:06:49.694 "compare": false, 00:06:49.694 "compare_and_write": false, 00:06:49.694 "abort": true, 00:06:49.694 "seek_hole": false, 00:06:49.694 "seek_data": false, 00:06:49.694 "copy": true, 00:06:49.694 "nvme_iov_md": false 00:06:49.694 }, 00:06:49.694 "memory_domains": [ 00:06:49.694 { 00:06:49.694 "dma_device_id": "system", 00:06:49.694 "dma_device_type": 1 00:06:49.694 }, 00:06:49.694 { 00:06:49.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.694 "dma_device_type": 2 00:06:49.694 } 00:06:49.694 ], 00:06:49.694 "driver_specific": {} 00:06:49.694 } 00:06:49.694 ] 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:49.694 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.695 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.955 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.955 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.955 "name": "Existed_Raid", 00:06:49.955 "uuid": "7b67f03e-81fa-4c43-ae53-0d8973849a2d", 00:06:49.955 "strip_size_kb": 64, 00:06:49.955 "state": "online", 00:06:49.955 "raid_level": "concat", 00:06:49.955 "superblock": false, 00:06:49.955 "num_base_bdevs": 2, 00:06:49.955 "num_base_bdevs_discovered": 2, 00:06:49.955 "num_base_bdevs_operational": 2, 00:06:49.955 "base_bdevs_list": [ 00:06:49.955 { 00:06:49.955 "name": "BaseBdev1", 00:06:49.955 "uuid": "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5", 00:06:49.955 "is_configured": true, 00:06:49.955 "data_offset": 0, 00:06:49.955 "data_size": 65536 00:06:49.955 }, 00:06:49.955 { 00:06:49.955 "name": "BaseBdev2", 00:06:49.955 "uuid": "1c5183a6-5be3-4a09-86b9-d95867e76347", 00:06:49.955 "is_configured": true, 00:06:49.955 "data_offset": 0, 00:06:49.955 "data_size": 65536 00:06:49.955 } 00:06:49.955 ] 00:06:49.955 }' 00:06:49.955 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:49.955 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.214 [2024-12-15 18:37:50.520769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.214 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:50.214 "name": "Existed_Raid", 00:06:50.214 "aliases": [ 00:06:50.214 "7b67f03e-81fa-4c43-ae53-0d8973849a2d" 00:06:50.214 ], 00:06:50.214 "product_name": "Raid Volume", 00:06:50.214 "block_size": 512, 00:06:50.214 "num_blocks": 131072, 00:06:50.214 "uuid": "7b67f03e-81fa-4c43-ae53-0d8973849a2d", 00:06:50.214 "assigned_rate_limits": { 00:06:50.214 "rw_ios_per_sec": 0, 00:06:50.214 "rw_mbytes_per_sec": 0, 00:06:50.214 "r_mbytes_per_sec": 
0, 00:06:50.214 "w_mbytes_per_sec": 0 00:06:50.214 }, 00:06:50.214 "claimed": false, 00:06:50.214 "zoned": false, 00:06:50.214 "supported_io_types": { 00:06:50.214 "read": true, 00:06:50.214 "write": true, 00:06:50.214 "unmap": true, 00:06:50.214 "flush": true, 00:06:50.214 "reset": true, 00:06:50.214 "nvme_admin": false, 00:06:50.214 "nvme_io": false, 00:06:50.214 "nvme_io_md": false, 00:06:50.214 "write_zeroes": true, 00:06:50.214 "zcopy": false, 00:06:50.214 "get_zone_info": false, 00:06:50.214 "zone_management": false, 00:06:50.214 "zone_append": false, 00:06:50.214 "compare": false, 00:06:50.214 "compare_and_write": false, 00:06:50.214 "abort": false, 00:06:50.214 "seek_hole": false, 00:06:50.214 "seek_data": false, 00:06:50.214 "copy": false, 00:06:50.215 "nvme_iov_md": false 00:06:50.215 }, 00:06:50.215 "memory_domains": [ 00:06:50.215 { 00:06:50.215 "dma_device_id": "system", 00:06:50.215 "dma_device_type": 1 00:06:50.215 }, 00:06:50.215 { 00:06:50.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.215 "dma_device_type": 2 00:06:50.215 }, 00:06:50.215 { 00:06:50.215 "dma_device_id": "system", 00:06:50.215 "dma_device_type": 1 00:06:50.215 }, 00:06:50.215 { 00:06:50.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.215 "dma_device_type": 2 00:06:50.215 } 00:06:50.215 ], 00:06:50.215 "driver_specific": { 00:06:50.215 "raid": { 00:06:50.215 "uuid": "7b67f03e-81fa-4c43-ae53-0d8973849a2d", 00:06:50.215 "strip_size_kb": 64, 00:06:50.215 "state": "online", 00:06:50.215 "raid_level": "concat", 00:06:50.215 "superblock": false, 00:06:50.215 "num_base_bdevs": 2, 00:06:50.215 "num_base_bdevs_discovered": 2, 00:06:50.215 "num_base_bdevs_operational": 2, 00:06:50.215 "base_bdevs_list": [ 00:06:50.215 { 00:06:50.215 "name": "BaseBdev1", 00:06:50.215 "uuid": "c02e7975-2332-4b4b-88dc-d92e2ae5b9f5", 00:06:50.215 "is_configured": true, 00:06:50.215 "data_offset": 0, 00:06:50.215 "data_size": 65536 00:06:50.215 }, 00:06:50.215 { 00:06:50.215 "name": "BaseBdev2", 
00:06:50.215 "uuid": "1c5183a6-5be3-4a09-86b9-d95867e76347", 00:06:50.215 "is_configured": true, 00:06:50.215 "data_offset": 0, 00:06:50.215 "data_size": 65536 00:06:50.215 } 00:06:50.215 ] 00:06:50.215 } 00:06:50.215 } 00:06:50.215 }' 00:06:50.215 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:50.215 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:50.215 BaseBdev2' 00:06:50.215 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.475 [2024-12-15 18:37:50.764064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:50.475 [2024-12-15 18:37:50.764099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.475 [2024-12-15 18:37:50.764163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.475 "name": "Existed_Raid", 00:06:50.475 "uuid": "7b67f03e-81fa-4c43-ae53-0d8973849a2d", 00:06:50.475 "strip_size_kb": 64, 00:06:50.475 
"state": "offline", 00:06:50.475 "raid_level": "concat", 00:06:50.475 "superblock": false, 00:06:50.475 "num_base_bdevs": 2, 00:06:50.475 "num_base_bdevs_discovered": 1, 00:06:50.475 "num_base_bdevs_operational": 1, 00:06:50.475 "base_bdevs_list": [ 00:06:50.475 { 00:06:50.475 "name": null, 00:06:50.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.475 "is_configured": false, 00:06:50.475 "data_offset": 0, 00:06:50.475 "data_size": 65536 00:06:50.475 }, 00:06:50.475 { 00:06:50.475 "name": "BaseBdev2", 00:06:50.475 "uuid": "1c5183a6-5be3-4a09-86b9-d95867e76347", 00:06:50.475 "is_configured": true, 00:06:50.475 "data_offset": 0, 00:06:50.475 "data_size": 65536 00:06:50.475 } 00:06:50.475 ] 00:06:50.475 }' 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.475 18:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.044 [2024-12-15 18:37:51.299695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:51.044 [2024-12-15 18:37:51.299759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74947 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74947 ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 74947 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74947 00:06:51.044 killing process with pid 74947 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74947' 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74947 00:06:51.044 [2024-12-15 18:37:51.400081] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.044 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74947 00:06:51.044 [2024-12-15 18:37:51.401657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.304 18:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:51.304 00:06:51.304 real 0m4.083s 00:06:51.304 user 0m6.315s 00:06:51.304 sys 0m0.852s 00:06:51.304 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.304 18:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.304 ************************************ 00:06:51.304 END TEST raid_state_function_test 00:06:51.304 ************************************ 00:06:51.564 18:37:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:51.564 18:37:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:51.564 18:37:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.564 18:37:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.564 ************************************ 00:06:51.564 START TEST raid_state_function_test_sb 00:06:51.564 ************************************ 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75189 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75189' 00:06:51.564 Process raid pid: 75189 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75189 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75189 ']' 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.564 18:37:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.564 [2024-12-15 18:37:51.897843] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:51.564 [2024-12-15 18:37:51.898051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.824 [2024-12-15 18:37:52.057126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.824 [2024-12-15 18:37:52.094622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.824 [2024-12-15 18:37:52.170727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.824 [2024-12-15 18:37:52.170866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.393 [2024-12-15 18:37:52.732717] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:52.393 [2024-12-15 18:37:52.732884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.393 [2024-12-15 18:37:52.732921] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.393 [2024-12-15 18:37:52.732954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.393 
18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.393 "name": "Existed_Raid", 00:06:52.393 "uuid": "755bf617-f3d7-44ed-a2af-61460b0d05d3", 00:06:52.393 "strip_size_kb": 64, 00:06:52.393 "state": "configuring", 00:06:52.393 "raid_level": "concat", 00:06:52.393 "superblock": true, 00:06:52.393 "num_base_bdevs": 2, 00:06:52.393 "num_base_bdevs_discovered": 0, 00:06:52.393 "num_base_bdevs_operational": 2, 00:06:52.393 "base_bdevs_list": [ 00:06:52.393 { 00:06:52.393 "name": "BaseBdev1", 00:06:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.393 "is_configured": false, 00:06:52.393 "data_offset": 0, 00:06:52.393 "data_size": 0 00:06:52.393 }, 00:06:52.393 { 00:06:52.393 "name": "BaseBdev2", 00:06:52.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.393 "is_configured": false, 00:06:52.393 "data_offset": 0, 00:06:52.393 "data_size": 0 00:06:52.393 } 00:06:52.393 ] 00:06:52.393 }' 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.393 18:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 [2024-12-15 18:37:53.159882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
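The `verify_raid_bdev_state` helper exercised throughout this trace selects the named raid bdev from `bdev_raid_get_bdevs all` output (`jq '.[] | select(.name == "Existed_Raid")'`) and checks its state, level, strip size, and operational base-bdev count. A Python sketch of that check, assuming the JSON shape shown in the dumps above (the function signature and trimmed sample are illustrative):

```python
import json

def verify_raid_bdev_state(bdevs_json: str, name: str, expected_state: str,
                           raid_level: str, strip_size: int,
                           num_operational: int) -> bool:
    """Pick the named raid bdev out of bdev_raid_get_bdevs output and
    compare the fields the shell helper appears to inspect."""
    info = next(b for b in json.loads(bdevs_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# Sample mirroring the "configuring" dump in the trace (fields trimmed).
sample = json.dumps([{
    "name": "Existed_Raid", "state": "configuring", "raid_level": "concat",
    "strip_size_kb": 64, "num_base_bdevs_operational": 2,
}])
assert verify_raid_bdev_state(sample, "Existed_Raid", "configuring",
                              "concat", 64, 2)
```

This matches the transitions seen in the log: `configuring` before base bdevs attach, `online` once both are claimed, and `offline` after a base bdev of a non-redundant (concat) array is deleted.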
00:06:52.963 [2024-12-15 18:37:53.159938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 [2024-12-15 18:37:53.167878] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:52.963 [2024-12-15 18:37:53.167919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:52.963 [2024-12-15 18:37:53.167927] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.963 [2024-12-15 18:37:53.167940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 [2024-12-15 18:37:53.190683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.963 BaseBdev1 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.963 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.963 [ 00:06:52.963 { 00:06:52.963 "name": "BaseBdev1", 00:06:52.963 "aliases": [ 00:06:52.963 "aad55739-2b8f-4da2-a2fc-b02a07064d48" 00:06:52.963 ], 00:06:52.963 "product_name": "Malloc disk", 00:06:52.963 "block_size": 512, 00:06:52.963 "num_blocks": 65536, 00:06:52.964 "uuid": "aad55739-2b8f-4da2-a2fc-b02a07064d48", 00:06:52.964 "assigned_rate_limits": { 00:06:52.964 "rw_ios_per_sec": 0, 00:06:52.964 "rw_mbytes_per_sec": 0, 00:06:52.964 "r_mbytes_per_sec": 0, 00:06:52.964 "w_mbytes_per_sec": 0 00:06:52.964 }, 00:06:52.964 "claimed": true, 
00:06:52.964 "claim_type": "exclusive_write", 00:06:52.964 "zoned": false, 00:06:52.964 "supported_io_types": { 00:06:52.964 "read": true, 00:06:52.964 "write": true, 00:06:52.964 "unmap": true, 00:06:52.964 "flush": true, 00:06:52.964 "reset": true, 00:06:52.964 "nvme_admin": false, 00:06:52.964 "nvme_io": false, 00:06:52.964 "nvme_io_md": false, 00:06:52.964 "write_zeroes": true, 00:06:52.964 "zcopy": true, 00:06:52.964 "get_zone_info": false, 00:06:52.964 "zone_management": false, 00:06:52.964 "zone_append": false, 00:06:52.964 "compare": false, 00:06:52.964 "compare_and_write": false, 00:06:52.964 "abort": true, 00:06:52.964 "seek_hole": false, 00:06:52.964 "seek_data": false, 00:06:52.964 "copy": true, 00:06:52.964 "nvme_iov_md": false 00:06:52.964 }, 00:06:52.964 "memory_domains": [ 00:06:52.964 { 00:06:52.964 "dma_device_id": "system", 00:06:52.964 "dma_device_type": 1 00:06:52.964 }, 00:06:52.964 { 00:06:52.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.964 "dma_device_type": 2 00:06:52.964 } 00:06:52.964 ], 00:06:52.964 "driver_specific": {} 00:06:52.964 } 00:06:52.964 ] 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.964 18:37:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.964 "name": "Existed_Raid", 00:06:52.964 "uuid": "980cd203-b141-4f1a-9567-c27bfd9dd002", 00:06:52.964 "strip_size_kb": 64, 00:06:52.964 "state": "configuring", 00:06:52.964 "raid_level": "concat", 00:06:52.964 "superblock": true, 00:06:52.964 "num_base_bdevs": 2, 00:06:52.964 "num_base_bdevs_discovered": 1, 00:06:52.964 "num_base_bdevs_operational": 2, 00:06:52.964 "base_bdevs_list": [ 00:06:52.964 { 00:06:52.964 "name": "BaseBdev1", 00:06:52.964 "uuid": "aad55739-2b8f-4da2-a2fc-b02a07064d48", 00:06:52.964 "is_configured": true, 00:06:52.964 "data_offset": 2048, 00:06:52.964 "data_size": 63488 00:06:52.964 }, 00:06:52.964 { 00:06:52.964 "name": "BaseBdev2", 00:06:52.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.964 
"is_configured": false, 00:06:52.964 "data_offset": 0, 00:06:52.964 "data_size": 0 00:06:52.964 } 00:06:52.964 ] 00:06:52.964 }' 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.964 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.224 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:53.224 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.224 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.483 [2024-12-15 18:37:53.669918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:53.483 [2024-12-15 18:37:53.670074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.483 [2024-12-15 18:37:53.681917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:53.483 [2024-12-15 18:37:53.684105] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:53.483 [2024-12-15 18:37:53.684191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.483 18:37:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.483 18:37:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.483 "name": "Existed_Raid", 00:06:53.483 "uuid": "e428fb4f-ac5c-4742-ae85-40448d01a481", 00:06:53.483 "strip_size_kb": 64, 00:06:53.483 "state": "configuring", 00:06:53.483 "raid_level": "concat", 00:06:53.483 "superblock": true, 00:06:53.483 "num_base_bdevs": 2, 00:06:53.483 "num_base_bdevs_discovered": 1, 00:06:53.483 "num_base_bdevs_operational": 2, 00:06:53.483 "base_bdevs_list": [ 00:06:53.483 { 00:06:53.483 "name": "BaseBdev1", 00:06:53.483 "uuid": "aad55739-2b8f-4da2-a2fc-b02a07064d48", 00:06:53.483 "is_configured": true, 00:06:53.483 "data_offset": 2048, 00:06:53.483 "data_size": 63488 00:06:53.483 }, 00:06:53.483 { 00:06:53.483 "name": "BaseBdev2", 00:06:53.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.483 "is_configured": false, 00:06:53.483 "data_offset": 0, 00:06:53.483 "data_size": 0 00:06:53.483 } 00:06:53.483 ] 00:06:53.483 }' 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.483 18:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.743 [2024-12-15 18:37:54.114056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.743 [2024-12-15 18:37:54.114288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:53.743 [2024-12-15 18:37:54.114304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:53.743 BaseBdev2 00:06:53.743 [2024-12-15 18:37:54.114615] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:53.743 [2024-12-15 18:37:54.114768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:53.743 [2024-12-15 18:37:54.114791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:06:53.743 [2024-12-15 18:37:54.114926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.743 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.744 
18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.744 [ 00:06:53.744 { 00:06:53.744 "name": "BaseBdev2", 00:06:53.744 "aliases": [ 00:06:53.744 "bc0894ba-173b-4cc0-abcc-b8a5dbfd5a18" 00:06:53.744 ], 00:06:53.744 "product_name": "Malloc disk", 00:06:53.744 "block_size": 512, 00:06:53.744 "num_blocks": 65536, 00:06:53.744 "uuid": "bc0894ba-173b-4cc0-abcc-b8a5dbfd5a18", 00:06:53.744 "assigned_rate_limits": { 00:06:53.744 "rw_ios_per_sec": 0, 00:06:53.744 "rw_mbytes_per_sec": 0, 00:06:53.744 "r_mbytes_per_sec": 0, 00:06:53.744 "w_mbytes_per_sec": 0 00:06:53.744 }, 00:06:53.744 "claimed": true, 00:06:53.744 "claim_type": "exclusive_write", 00:06:53.744 "zoned": false, 00:06:53.744 "supported_io_types": { 00:06:53.744 "read": true, 00:06:53.744 "write": true, 00:06:53.744 "unmap": true, 00:06:53.744 "flush": true, 00:06:53.744 "reset": true, 00:06:53.744 "nvme_admin": false, 00:06:53.744 "nvme_io": false, 00:06:53.744 "nvme_io_md": false, 00:06:53.744 "write_zeroes": true, 00:06:53.744 "zcopy": true, 00:06:53.744 "get_zone_info": false, 00:06:53.744 "zone_management": false, 00:06:53.744 "zone_append": false, 00:06:53.744 "compare": false, 00:06:53.744 "compare_and_write": false, 00:06:53.744 "abort": true, 00:06:53.744 "seek_hole": false, 00:06:53.744 "seek_data": false, 00:06:53.744 "copy": true, 00:06:53.744 "nvme_iov_md": false 00:06:53.744 }, 00:06:53.744 "memory_domains": [ 00:06:53.744 { 00:06:53.744 "dma_device_id": "system", 00:06:53.744 "dma_device_type": 1 00:06:53.744 }, 00:06:53.744 { 00:06:53.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.744 "dma_device_type": 2 00:06:53.744 } 00:06:53.744 ], 00:06:53.744 "driver_specific": {} 00:06:53.744 } 00:06:53.744 ] 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:53.744 18:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.744 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.004 18:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.004 "name": "Existed_Raid", 00:06:54.004 "uuid": "e428fb4f-ac5c-4742-ae85-40448d01a481", 00:06:54.004 "strip_size_kb": 64, 00:06:54.004 "state": "online", 00:06:54.004 "raid_level": "concat", 00:06:54.004 "superblock": true, 00:06:54.004 "num_base_bdevs": 2, 00:06:54.004 "num_base_bdevs_discovered": 2, 00:06:54.004 "num_base_bdevs_operational": 2, 00:06:54.004 "base_bdevs_list": [ 00:06:54.004 { 00:06:54.004 "name": "BaseBdev1", 00:06:54.004 "uuid": "aad55739-2b8f-4da2-a2fc-b02a07064d48", 00:06:54.004 "is_configured": true, 00:06:54.004 "data_offset": 2048, 00:06:54.004 "data_size": 63488 00:06:54.004 }, 00:06:54.004 { 00:06:54.004 "name": "BaseBdev2", 00:06:54.004 "uuid": "bc0894ba-173b-4cc0-abcc-b8a5dbfd5a18", 00:06:54.004 "is_configured": true, 00:06:54.004 "data_offset": 2048, 00:06:54.004 "data_size": 63488 00:06:54.004 } 00:06:54.004 ] 00:06:54.004 }' 00:06:54.004 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.004 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:54.264 18:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.264 [2024-12-15 18:37:54.585599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:54.264 "name": "Existed_Raid", 00:06:54.264 "aliases": [ 00:06:54.264 "e428fb4f-ac5c-4742-ae85-40448d01a481" 00:06:54.264 ], 00:06:54.264 "product_name": "Raid Volume", 00:06:54.264 "block_size": 512, 00:06:54.264 "num_blocks": 126976, 00:06:54.264 "uuid": "e428fb4f-ac5c-4742-ae85-40448d01a481", 00:06:54.264 "assigned_rate_limits": { 00:06:54.264 "rw_ios_per_sec": 0, 00:06:54.264 "rw_mbytes_per_sec": 0, 00:06:54.264 "r_mbytes_per_sec": 0, 00:06:54.264 "w_mbytes_per_sec": 0 00:06:54.264 }, 00:06:54.264 "claimed": false, 00:06:54.264 "zoned": false, 00:06:54.264 "supported_io_types": { 00:06:54.264 "read": true, 00:06:54.264 "write": true, 00:06:54.264 "unmap": true, 00:06:54.264 "flush": true, 00:06:54.264 "reset": true, 00:06:54.264 "nvme_admin": false, 00:06:54.264 "nvme_io": false, 00:06:54.264 "nvme_io_md": false, 00:06:54.264 "write_zeroes": true, 00:06:54.264 "zcopy": false, 00:06:54.264 "get_zone_info": false, 00:06:54.264 "zone_management": false, 00:06:54.264 "zone_append": false, 00:06:54.264 "compare": false, 00:06:54.264 "compare_and_write": false, 00:06:54.264 "abort": false, 00:06:54.264 "seek_hole": false, 00:06:54.264 "seek_data": false, 00:06:54.264 "copy": false, 00:06:54.264 "nvme_iov_md": false 00:06:54.264 }, 00:06:54.264 "memory_domains": [ 00:06:54.264 { 00:06:54.264 "dma_device_id": 
"system", 00:06:54.264 "dma_device_type": 1 00:06:54.264 }, 00:06:54.264 { 00:06:54.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.264 "dma_device_type": 2 00:06:54.264 }, 00:06:54.264 { 00:06:54.264 "dma_device_id": "system", 00:06:54.264 "dma_device_type": 1 00:06:54.264 }, 00:06:54.264 { 00:06:54.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.264 "dma_device_type": 2 00:06:54.264 } 00:06:54.264 ], 00:06:54.264 "driver_specific": { 00:06:54.264 "raid": { 00:06:54.264 "uuid": "e428fb4f-ac5c-4742-ae85-40448d01a481", 00:06:54.264 "strip_size_kb": 64, 00:06:54.264 "state": "online", 00:06:54.264 "raid_level": "concat", 00:06:54.264 "superblock": true, 00:06:54.264 "num_base_bdevs": 2, 00:06:54.264 "num_base_bdevs_discovered": 2, 00:06:54.264 "num_base_bdevs_operational": 2, 00:06:54.264 "base_bdevs_list": [ 00:06:54.264 { 00:06:54.264 "name": "BaseBdev1", 00:06:54.264 "uuid": "aad55739-2b8f-4da2-a2fc-b02a07064d48", 00:06:54.264 "is_configured": true, 00:06:54.264 "data_offset": 2048, 00:06:54.264 "data_size": 63488 00:06:54.264 }, 00:06:54.264 { 00:06:54.264 "name": "BaseBdev2", 00:06:54.264 "uuid": "bc0894ba-173b-4cc0-abcc-b8a5dbfd5a18", 00:06:54.264 "is_configured": true, 00:06:54.264 "data_offset": 2048, 00:06:54.264 "data_size": 63488 00:06:54.264 } 00:06:54.264 ] 00:06:54.264 } 00:06:54.264 } 00:06:54.264 }' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:54.264 BaseBdev2' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.264 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.524 [2024-12-15 18:37:54.780987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:54.524 [2024-12-15 18:37:54.781026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.524 [2024-12-15 18:37:54.781096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:54.524 18:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.524 "name": "Existed_Raid", 00:06:54.524 "uuid": "e428fb4f-ac5c-4742-ae85-40448d01a481", 00:06:54.524 "strip_size_kb": 64, 00:06:54.524 "state": "offline", 00:06:54.524 "raid_level": "concat", 00:06:54.524 "superblock": true, 00:06:54.524 "num_base_bdevs": 2, 00:06:54.524 "num_base_bdevs_discovered": 1, 00:06:54.524 "num_base_bdevs_operational": 1, 00:06:54.524 "base_bdevs_list": [ 00:06:54.524 { 00:06:54.524 "name": null, 00:06:54.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.524 "is_configured": false, 00:06:54.524 "data_offset": 0, 00:06:54.524 "data_size": 63488 00:06:54.524 }, 00:06:54.524 { 00:06:54.524 "name": "BaseBdev2", 00:06:54.524 "uuid": "bc0894ba-173b-4cc0-abcc-b8a5dbfd5a18", 00:06:54.524 "is_configured": true, 00:06:54.524 "data_offset": 2048, 00:06:54.524 "data_size": 63488 00:06:54.524 } 00:06:54.524 ] 00:06:54.524 }' 00:06:54.524 
18:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.524 18:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.094 [2024-12-15 18:37:55.284681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:55.094 [2024-12-15 18:37:55.284835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75189 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75189 ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75189 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75189 00:06:55.094 killing process with pid 75189 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75189' 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75189 00:06:55.094 [2024-12-15 18:37:55.390233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.094 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75189 00:06:55.094 [2024-12-15 18:37:55.391796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.353 18:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:55.354 ************************************ 00:06:55.354 END TEST raid_state_function_test_sb 00:06:55.354 ************************************ 00:06:55.354 00:06:55.354 real 0m3.926s 00:06:55.354 user 0m5.999s 00:06:55.354 sys 0m0.852s 00:06:55.354 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.354 18:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.354 18:37:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:55.354 18:37:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:55.354 18:37:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.354 18:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.620 ************************************ 00:06:55.620 START TEST raid_superblock_test 00:06:55.620 ************************************ 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75430 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75430 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:55.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75430 ']' 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.620 18:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.620 [2024-12-15 18:37:55.892047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:55.620 [2024-12-15 18:37:55.892187] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75430 ] 00:06:55.890 [2024-12-15 18:37:56.066927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.890 [2024-12-15 18:37:56.106825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.890 [2024-12-15 18:37:56.184987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.890 [2024-12-15 18:37:56.185043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= 
num_base_bdevs )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.460 malloc1 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.460 [2024-12-15 18:37:56.787100] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:56.460 [2024-12-15 18:37:56.787245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.460 [2024-12-15 18:37:56.787296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:56.460 [2024-12-15 18:37:56.787342] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:06:56.460 [2024-12-15 18:37:56.789762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.460 [2024-12-15 18:37:56.789859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:56.460 pt1 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.460 malloc2 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.460 [2024-12-15 18:37:56.825655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:56.460 [2024-12-15 18:37:56.825764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.460 [2024-12-15 18:37:56.825809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:56.460 [2024-12-15 18:37:56.825842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.460 [2024-12-15 18:37:56.828136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.460 [2024-12-15 18:37:56.828233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:56.460 pt2 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:56.460 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.461 [2024-12-15 18:37:56.837666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:56.461 [2024-12-15 18:37:56.839813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:56.461 [2024-12-15 18:37:56.839968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:56.461 [2024-12-15 18:37:56.839984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:06:56.461 [2024-12-15 18:37:56.840266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.461 [2024-12-15 18:37:56.840412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:56.461 [2024-12-15 18:37:56.840421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:06:56.461 [2024-12-15 18:37:56.840557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
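The trace above shows the test's setup loop (bdev_raid.sh@416-426) building each base device with two RPCs and accumulating the names into three parallel arrays. A runnable pure-bash sketch of that pattern, with `rpc_cmd` stubbed out since the real helper talks to a running SPDK app:

```shell
#!/usr/bin/env bash
# Sketch of the setup loop traced above; rpc_cmd is a stub, not the real SPDK helper.
num_base_bdevs=2
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

rpc_cmd() { echo "rpc: $*"; }   # stub; the real helper sends JSON-RPC to the SPDK app

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    printf -v bdev_pt_uuid '00000000-0000-0000-0000-%012d' "$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

echo "${base_bdevs_pt[*]}"   # pt1 pt2
```

The passthru wrappers give each base bdev a fixed UUID, which is what lets the superblock test match `base_bdevs_list` entries by UUID later in the trace.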
00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.461 "name": "raid_bdev1", 00:06:56.461 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95", 00:06:56.461 "strip_size_kb": 64, 00:06:56.461 "state": "online", 00:06:56.461 "raid_level": "concat", 00:06:56.461 "superblock": true, 00:06:56.461 "num_base_bdevs": 2, 00:06:56.461 "num_base_bdevs_discovered": 2, 00:06:56.461 "num_base_bdevs_operational": 2, 00:06:56.461 "base_bdevs_list": [ 00:06:56.461 { 00:06:56.461 "name": "pt1", 00:06:56.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:56.461 "is_configured": true, 00:06:56.461 "data_offset": 2048, 00:06:56.461 "data_size": 63488 00:06:56.461 }, 00:06:56.461 { 00:06:56.461 "name": "pt2", 00:06:56.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:56.461 "is_configured": true, 00:06:56.461 "data_offset": 2048, 00:06:56.461 "data_size": 63488 00:06:56.461 } 00:06:56.461 ] 00:06:56.461 }' 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.461 18:37:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.031 [2024-12-15 18:37:57.297195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.031 "name": "raid_bdev1", 00:06:57.031 "aliases": [ 00:06:57.031 "afd83408-4caf-4601-9ea0-e9105a52ec95" 00:06:57.031 ], 00:06:57.031 "product_name": "Raid Volume", 00:06:57.031 "block_size": 512, 00:06:57.031 "num_blocks": 126976, 00:06:57.031 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95", 00:06:57.031 "assigned_rate_limits": { 00:06:57.031 "rw_ios_per_sec": 0, 00:06:57.031 "rw_mbytes_per_sec": 0, 00:06:57.031 "r_mbytes_per_sec": 0, 00:06:57.031 "w_mbytes_per_sec": 0 00:06:57.031 }, 00:06:57.031 "claimed": false, 00:06:57.031 "zoned": false, 00:06:57.031 "supported_io_types": { 00:06:57.031 "read": true, 00:06:57.031 "write": true, 00:06:57.031 "unmap": true, 00:06:57.031 "flush": true, 00:06:57.031 "reset": true, 00:06:57.031 "nvme_admin": false, 00:06:57.031 "nvme_io": false, 00:06:57.031 "nvme_io_md": false, 00:06:57.031 "write_zeroes": true, 00:06:57.031 "zcopy": false, 00:06:57.031 "get_zone_info": false, 00:06:57.031 "zone_management": false, 00:06:57.031 "zone_append": false, 00:06:57.031 "compare": false, 00:06:57.031 "compare_and_write": false, 00:06:57.031 "abort": false, 00:06:57.031 
"seek_hole": false, 00:06:57.031 "seek_data": false, 00:06:57.031 "copy": false, 00:06:57.031 "nvme_iov_md": false 00:06:57.031 }, 00:06:57.031 "memory_domains": [ 00:06:57.031 { 00:06:57.031 "dma_device_id": "system", 00:06:57.031 "dma_device_type": 1 00:06:57.031 }, 00:06:57.031 { 00:06:57.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.031 "dma_device_type": 2 00:06:57.031 }, 00:06:57.031 { 00:06:57.031 "dma_device_id": "system", 00:06:57.031 "dma_device_type": 1 00:06:57.031 }, 00:06:57.031 { 00:06:57.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.031 "dma_device_type": 2 00:06:57.031 } 00:06:57.031 ], 00:06:57.031 "driver_specific": { 00:06:57.031 "raid": { 00:06:57.031 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95", 00:06:57.031 "strip_size_kb": 64, 00:06:57.031 "state": "online", 00:06:57.031 "raid_level": "concat", 00:06:57.031 "superblock": true, 00:06:57.031 "num_base_bdevs": 2, 00:06:57.031 "num_base_bdevs_discovered": 2, 00:06:57.031 "num_base_bdevs_operational": 2, 00:06:57.031 "base_bdevs_list": [ 00:06:57.031 { 00:06:57.031 "name": "pt1", 00:06:57.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.031 "is_configured": true, 00:06:57.031 "data_offset": 2048, 00:06:57.031 "data_size": 63488 00:06:57.031 }, 00:06:57.031 { 00:06:57.031 "name": "pt2", 00:06:57.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.031 "is_configured": true, 00:06:57.031 "data_offset": 2048, 00:06:57.031 "data_size": 63488 00:06:57.031 } 00:06:57.031 ] 00:06:57.031 } 00:06:57.031 } 00:06:57.031 }' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:57.031 pt2' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.031 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 [2024-12-15 18:37:57.520659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=afd83408-4caf-4601-9ea0-e9105a52ec95 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z afd83408-4caf-4601-9ea0-e9105a52ec95 ']' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 [2024-12-15 18:37:57.568364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:57.291 [2024-12-15 18:37:57.568394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:57.291 [2024-12-15 18:37:57.568480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.291 [2024-12-15 18:37:57.568537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.291 [2024-12-15 18:37:57.568550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:57.291 18:37:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.291 [2024-12-15 18:37:57.704177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:57.291 [2024-12-15 18:37:57.706453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:57.291 [2024-12-15 18:37:57.706567] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:57.291 [2024-12-15 18:37:57.706650] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:57.291 [2024-12-15 18:37:57.706705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:57.291 [2024-12-15 18:37:57.706736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:06:57.291 request: 00:06:57.291 { 00:06:57.291 "name": "raid_bdev1", 00:06:57.291 "raid_level": "concat", 00:06:57.291 "base_bdevs": [ 00:06:57.291 "malloc1", 00:06:57.291 "malloc2" 00:06:57.291 ], 00:06:57.291 "strip_size_kb": 64, 00:06:57.291 "superblock": false, 00:06:57.291 "method": "bdev_raid_create", 00:06:57.291 "req_id": 1 00:06:57.291 } 00:06:57.291 Got JSON-RPC error response 00:06:57.291 response: 00:06:57.291 { 00:06:57.291 "code": -17, 00:06:57.291 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:57.291 } 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.291 18:37:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:57.292 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.552 [2024-12-15 18:37:57.768004] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:57.552 [2024-12-15 18:37:57.768105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.552 [2024-12-15 18:37:57.768144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:57.552 [2024-12-15 18:37:57.768172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.552 [2024-12-15 18:37:57.770620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.552 [2024-12-15 18:37:57.770689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:57.552 [2024-12-15 18:37:57.770785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:57.552 [2024-12-15 18:37:57.770853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:57.552 pt1 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.552 "name": "raid_bdev1", 00:06:57.552 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95", 00:06:57.552 "strip_size_kb": 64, 00:06:57.552 "state": "configuring", 00:06:57.552 "raid_level": "concat", 00:06:57.552 "superblock": true, 00:06:57.552 "num_base_bdevs": 2, 00:06:57.552 "num_base_bdevs_discovered": 1, 00:06:57.552 "num_base_bdevs_operational": 2, 00:06:57.552 "base_bdevs_list": [ 00:06:57.552 { 00:06:57.552 
"name": "pt1", 00:06:57.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:57.552 "is_configured": true, 00:06:57.552 "data_offset": 2048, 00:06:57.552 "data_size": 63488 00:06:57.552 }, 00:06:57.552 { 00:06:57.552 "name": null, 00:06:57.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:57.552 "is_configured": false, 00:06:57.552 "data_offset": 2048, 00:06:57.552 "data_size": 63488 00:06:57.552 } 00:06:57.552 ] 00:06:57.552 }' 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.552 18:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.812 [2024-12-15 18:37:58.219277] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:57.812 [2024-12-15 18:37:58.219344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.812 [2024-12-15 18:37:58.219369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:06:57.812 [2024-12-15 18:37:58.219379] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.812 [2024-12-15 18:37:58.219846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.812 [2024-12-15 18:37:58.219873] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:57.812 [2024-12-15 18:37:58.219954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:57.812 [2024-12-15 18:37:58.219977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:57.812 [2024-12-15 18:37:58.220070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:06:57.812 [2024-12-15 18:37:58.220084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.812 [2024-12-15 18:37:58.220361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:06:57.812 [2024-12-15 18:37:58.220487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:06:57.812 [2024-12-15 18:37:58.220502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:06:57.812 [2024-12-15 18:37:58.220609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.812 pt2 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.812 
18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:57.812 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.072 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:58.072 "name": "raid_bdev1",
00:06:58.072 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95",
00:06:58.072 "strip_size_kb": 64,
00:06:58.072 "state": "online",
00:06:58.072 "raid_level": "concat",
00:06:58.072 "superblock": true,
00:06:58.072 "num_base_bdevs": 2,
00:06:58.072 "num_base_bdevs_discovered": 2,
00:06:58.072 "num_base_bdevs_operational": 2,
00:06:58.072 "base_bdevs_list": [
00:06:58.072 {
00:06:58.072 "name": "pt1",
00:06:58.072 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:58.072 "is_configured": true,
00:06:58.072 "data_offset": 2048,
00:06:58.072 "data_size": 63488
00:06:58.072 },
00:06:58.072 {
00:06:58.072 "name": "pt2",
00:06:58.072 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:58.072 "is_configured": true,
00:06:58.072 "data_offset": 2048,
00:06:58.072 "data_size": 63488
00:06:58.072 }
00:06:58.072 ]
00:06:58.072 }'
00:06:58.072 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:58.072 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.332 [2024-12-15 18:37:58.706757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:06:58.332 "name": "raid_bdev1",
00:06:58.332 "aliases": [
00:06:58.332 "afd83408-4caf-4601-9ea0-e9105a52ec95"
00:06:58.332 ],
00:06:58.332 "product_name": "Raid Volume",
00:06:58.332 "block_size": 512,
00:06:58.332 "num_blocks": 126976,
00:06:58.332 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95",
00:06:58.332 "assigned_rate_limits": {
00:06:58.332 "rw_ios_per_sec": 0,
00:06:58.332 "rw_mbytes_per_sec": 0,
00:06:58.332 "r_mbytes_per_sec": 0,
00:06:58.332 "w_mbytes_per_sec": 0
00:06:58.332 },
00:06:58.332 "claimed": false,
00:06:58.332 "zoned": false,
00:06:58.332 "supported_io_types": {
00:06:58.332 "read": true,
00:06:58.332 "write": true,
00:06:58.332 "unmap": true,
00:06:58.332 "flush": true,
00:06:58.332 "reset": true,
00:06:58.332 "nvme_admin": false,
00:06:58.332 "nvme_io": false,
00:06:58.332 "nvme_io_md": false,
00:06:58.332 "write_zeroes": true,
00:06:58.332 "zcopy": false,
00:06:58.332 "get_zone_info": false,
00:06:58.332 "zone_management": false,
00:06:58.332 "zone_append": false,
00:06:58.332 "compare": false,
00:06:58.332 "compare_and_write": false,
00:06:58.332 "abort": false,
00:06:58.332 "seek_hole": false,
00:06:58.332 "seek_data": false,
00:06:58.332 "copy": false,
00:06:58.332 "nvme_iov_md": false
00:06:58.332 },
00:06:58.332 "memory_domains": [
00:06:58.332 {
00:06:58.332 "dma_device_id": "system",
00:06:58.332 "dma_device_type": 1
00:06:58.332 },
00:06:58.332 {
00:06:58.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:58.332 "dma_device_type": 2
00:06:58.332 },
00:06:58.332 {
00:06:58.332 "dma_device_id": "system",
00:06:58.332 "dma_device_type": 1
00:06:58.332 },
00:06:58.332 {
00:06:58.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:58.332 "dma_device_type": 2
00:06:58.332 }
00:06:58.332 ],
00:06:58.332 "driver_specific": {
00:06:58.332 "raid": {
00:06:58.332 "uuid": "afd83408-4caf-4601-9ea0-e9105a52ec95",
00:06:58.332 "strip_size_kb": 64,
00:06:58.332 "state": "online",
00:06:58.332 "raid_level": "concat",
00:06:58.332 "superblock": true,
00:06:58.332 "num_base_bdevs": 2,
00:06:58.332 "num_base_bdevs_discovered": 2,
00:06:58.332 "num_base_bdevs_operational": 2,
00:06:58.332 "base_bdevs_list": [
00:06:58.332 {
00:06:58.332 "name": "pt1",
00:06:58.332 "uuid": "00000000-0000-0000-0000-000000000001",
00:06:58.332 "is_configured": true,
00:06:58.332 "data_offset": 2048,
00:06:58.332 "data_size": 63488
00:06:58.332 },
00:06:58.332 {
00:06:58.332 "name": "pt2",
00:06:58.332 "uuid": "00000000-0000-0000-0000-000000000002",
00:06:58.332 "is_configured": true,
00:06:58.332 "data_offset": 2048,
00:06:58.332 "data_size": 63488
00:06:58.332 }
00:06:58.332 ]
00:06:58.332 }
00:06:58.332 }
00:06:58.332 }'
00:06:58.332 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:06:58.592 pt2'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:06:58.592 [2024-12-15 18:37:58.938205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' afd83408-4caf-4601-9ea0-e9105a52ec95 '!=' afd83408-4caf-4601-9ea0-e9105a52ec95 ']'
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:06:58.592 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75430
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75430 ']'
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75430
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:58.593 18:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75430
00:06:58.593 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:58.593 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:58.593 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75430'
killing process with pid 75430
00:06:58.593 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75430
00:06:58.593 [2024-12-15 18:37:59.028560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:58.593 [2024-12-15 18:37:59.028732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:58.593 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75430
00:06:58.593 [2024-12-15 18:37:59.028834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:58.593 [2024-12-15 18:37:59.028846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:06:58.852 [2024-12-15 18:37:59.071073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:59.112 18:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:06:59.112
00:06:59.112 real 0m3.598s
00:06:59.112 user 0m5.473s
00:06:59.112 sys 0m0.798s
00:06:59.112 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:59.112 18:37:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.112 ************************************
00:06:59.112 END TEST raid_superblock_test
00:06:59.112 ************************************
00:06:59.112 18:37:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read
00:06:59.112 18:37:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:59.112 18:37:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:59.112 18:37:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:59.112 ************************************
00:06:59.112 START TEST raid_read_error_test
00:06:59.112 ************************************
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:06:59.112 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ESMBwNGWui
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75631
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75631
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75631 ']'
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:59.113 18:37:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.372 [2024-12-15 18:37:59.573720] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:59.372 [2024-12-15 18:37:59.573942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75631 ]
00:06:59.372 [2024-12-15 18:37:59.724829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.372 [2024-12-15 18:37:59.765920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.632 [2024-12-15 18:37:59.842451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.632 [2024-12-15 18:37:59.842569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 BaseBdev1_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 true
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 [2024-12-15 18:38:00.439359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:00.203 [2024-12-15 18:38:00.439438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.203 [2024-12-15 18:38:00.439469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:00.203 [2024-12-15 18:38:00.439481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.203 [2024-12-15 18:38:00.441925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.203 [2024-12-15 18:38:00.442045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:00.203 BaseBdev1
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 BaseBdev2_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 true
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 [2024-12-15 18:38:00.486360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:00.203 [2024-12-15 18:38:00.486421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.203 [2024-12-15 18:38:00.486444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:00.203 [2024-12-15 18:38:00.486453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.203 [2024-12-15 18:38:00.488783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.203 [2024-12-15 18:38:00.488845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:00.203 BaseBdev2
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 [2024-12-15 18:38:00.498402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:00.203 [2024-12-15 18:38:00.500476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:00.203 [2024-12-15 18:38:00.500737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:00.203 [2024-12-15 18:38:00.500755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:00.203 [2024-12-15 18:38:00.501045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:07:00.203 [2024-12-15 18:38:00.501205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:00.203 [2024-12-15 18:38:00.501228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:07:00.203 [2024-12-15 18:38:00.501366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.203 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:00.203 "name": "raid_bdev1",
00:07:00.203 "uuid": "c0c43e09-bfb0-401f-bb32-15e67630acae",
00:07:00.203 "strip_size_kb": 64,
00:07:00.203 "state": "online",
00:07:00.203 "raid_level": "concat",
00:07:00.203 "superblock": true,
00:07:00.203 "num_base_bdevs": 2,
00:07:00.203 "num_base_bdevs_discovered": 2,
00:07:00.203 "num_base_bdevs_operational": 2,
00:07:00.203 "base_bdevs_list": [
00:07:00.203 {
00:07:00.203 "name": "BaseBdev1",
00:07:00.203 "uuid": "3898e11b-ff51-5f91-9d23-02f82cc9eea0",
00:07:00.203 "is_configured": true,
00:07:00.203 "data_offset": 2048,
00:07:00.203 "data_size": 63488
00:07:00.204 },
00:07:00.204 {
00:07:00.204 "name": "BaseBdev2",
00:07:00.204 "uuid": "c8e627fc-567e-5f23-a2e0-7bdc3a3b5d49",
00:07:00.204 "is_configured": true,
00:07:00.204 "data_offset": 2048,
00:07:00.204 "data_size": 63488
00:07:00.204 }
00:07:00.204 ]
00:07:00.204 }'
00:07:00.204 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:00.204 18:38:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.773 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:00.773 18:38:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
[2024-12-15 18:38:01.029989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.713 18:38:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.713 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.713 "name": "raid_bdev1",
00:07:01.713 "uuid": "c0c43e09-bfb0-401f-bb32-15e67630acae",
00:07:01.713 "strip_size_kb": 64,
00:07:01.713 "state": "online",
00:07:01.713 "raid_level": "concat",
00:07:01.714 "superblock": true,
00:07:01.714 "num_base_bdevs": 2,
00:07:01.714 "num_base_bdevs_discovered": 2,
00:07:01.714 "num_base_bdevs_operational": 2,
00:07:01.714 "base_bdevs_list": [
00:07:01.714 {
00:07:01.714 "name": "BaseBdev1",
00:07:01.714 "uuid": "3898e11b-ff51-5f91-9d23-02f82cc9eea0",
00:07:01.714 "is_configured": true,
00:07:01.714 "data_offset": 2048,
00:07:01.714 "data_size": 63488
00:07:01.714 },
00:07:01.714 {
00:07:01.714 "name": "BaseBdev2",
00:07:01.714 "uuid": "c8e627fc-567e-5f23-a2e0-7bdc3a3b5d49",
00:07:01.714 "is_configured": true,
00:07:01.714 "data_offset": 2048,
00:07:01.714 "data_size": 63488
00:07:01.714 }
00:07:01.714 ]
00:07:01.714 }'
00:07:01.714 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.714 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.973 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:01.973 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.973 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.233 [2024-12-15 18:38:02.414587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:02.233 [2024-12-15 18:38:02.414635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:02.233 [2024-12-15 18:38:02.417195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:02.233 [2024-12-15 18:38:02.417245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:02.233 [2024-12-15 18:38:02.417300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:02.233 [2024-12-15 18:38:02.417310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:07:02.233 {
00:07:02.233 "results": [
00:07:02.233 {
00:07:02.233 "job": "raid_bdev1",
00:07:02.233 "core_mask": "0x1",
00:07:02.233 "workload": "randrw",
00:07:02.233 "percentage": 50,
00:07:02.233 "status": "finished",
00:07:02.233 "queue_depth": 1,
00:07:02.233 "io_size": 131072,
00:07:02.233 "runtime": 1.385208,
00:07:02.233 "iops": 15173.894462059128,
00:07:02.233 "mibps": 1896.736807757391,
00:07:02.233 "io_failed": 1,
00:07:02.233 "io_timeout": 0,
00:07:02.233 "avg_latency_us": 92.03978809950183,
00:07:02.233 "min_latency_us": 24.705676855895195,
00:07:02.233 "max_latency_us": 1280.6707423580785
00:07:02.233 }
00:07:02.233 ],
00:07:02.233 "core_count": 1
00:07:02.233 }
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75631
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75631 ']'
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75631
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75631
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 75631
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75631'
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75631
00:07:02.233 [2024-12-15 18:38:02.458350] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:02.233 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75631
00:07:02.233 [2024-12-15 18:38:02.487307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ESMBwNGWui
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
************************************
END TEST raid_read_error_test
************************************
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:07:02.493
00:07:02.493 real 0m3.356s
00:07:02.493 user 0m4.180s
00:07:02.493 sys 0m0.581s
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:02.493 18:38:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.493 18:38:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:07:02.493 18:38:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:02.493 18:38:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:02.493 18:38:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:02.493 ************************************
00:07:02.493 START TEST raid_write_error_test
00:07:02.493 ************************************
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:02.493 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.K93vqt3Qei
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75760
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75760
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75760 ']'
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:02.494 18:38:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.754 [2024-12-15 18:38:03.004898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:07:02.754 [2024-12-15 18:38:03.005115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75760 ]
00:07:02.754 [2024-12-15 18:38:03.177872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.013 [2024-12-15 18:38:03.215988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.013 [2024-12-15 18:38:03.292859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:03.013 [2024-12-15 18:38:03.292985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in
"${base_bdevs[@]}" 00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.583 BaseBdev1_malloc 00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.583 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 true 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 [2024-12-15 18:38:03.874659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:03.584 [2024-12-15 18:38:03.874736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.584 [2024-12-15 18:38:03.874762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:03.584 [2024-12-15 18:38:03.874772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.584 [2024-12-15 18:38:03.877321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.584 [2024-12-15 18:38:03.877362] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:03.584 BaseBdev1 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 BaseBdev2_malloc 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 true 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 [2024-12-15 18:38:03.921788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:03.584 [2024-12-15 18:38:03.921863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.584 [2024-12-15 18:38:03.921887] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:03.584 
[2024-12-15 18:38:03.921897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.584 [2024-12-15 18:38:03.924303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.584 [2024-12-15 18:38:03.924342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:03.584 BaseBdev2 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 [2024-12-15 18:38:03.933844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.584 [2024-12-15 18:38:03.935995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:03.584 [2024-12-15 18:38:03.936165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:03.584 [2024-12-15 18:38:03.936207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:03.584 [2024-12-15 18:38:03.936477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:03.584 [2024-12-15 18:38:03.936635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:03.584 [2024-12-15 18:38:03.936649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:03.584 [2024-12-15 18:38:03.936784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 
18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.584 "name": "raid_bdev1", 00:07:03.584 "uuid": "50b6fbeb-6d8d-476e-886f-7211568455d9", 00:07:03.584 "strip_size_kb": 64, 00:07:03.584 "state": "online", 00:07:03.584 "raid_level": "concat", 00:07:03.584 "superblock": true, 
00:07:03.584 "num_base_bdevs": 2, 00:07:03.584 "num_base_bdevs_discovered": 2, 00:07:03.584 "num_base_bdevs_operational": 2, 00:07:03.584 "base_bdevs_list": [ 00:07:03.584 { 00:07:03.584 "name": "BaseBdev1", 00:07:03.584 "uuid": "e328af36-cc4e-5e64-a0c9-a0fb861c1fba", 00:07:03.584 "is_configured": true, 00:07:03.584 "data_offset": 2048, 00:07:03.584 "data_size": 63488 00:07:03.584 }, 00:07:03.584 { 00:07:03.584 "name": "BaseBdev2", 00:07:03.584 "uuid": "3c53c184-35f0-5f07-bc1c-40b371f83f7d", 00:07:03.584 "is_configured": true, 00:07:03.584 "data_offset": 2048, 00:07:03.584 "data_size": 63488 00:07:03.584 } 00:07:03.584 ] 00:07:03.584 }' 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.584 18:38:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.154 18:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:04.154 18:38:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:04.154 [2024-12-15 18:38:04.501327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.097 "name": "raid_bdev1", 00:07:05.097 "uuid": "50b6fbeb-6d8d-476e-886f-7211568455d9", 00:07:05.097 "strip_size_kb": 64, 00:07:05.097 "state": "online", 00:07:05.097 "raid_level": "concat", 
00:07:05.097 "superblock": true, 00:07:05.097 "num_base_bdevs": 2, 00:07:05.097 "num_base_bdevs_discovered": 2, 00:07:05.097 "num_base_bdevs_operational": 2, 00:07:05.097 "base_bdevs_list": [ 00:07:05.097 { 00:07:05.097 "name": "BaseBdev1", 00:07:05.097 "uuid": "e328af36-cc4e-5e64-a0c9-a0fb861c1fba", 00:07:05.097 "is_configured": true, 00:07:05.097 "data_offset": 2048, 00:07:05.097 "data_size": 63488 00:07:05.097 }, 00:07:05.097 { 00:07:05.097 "name": "BaseBdev2", 00:07:05.097 "uuid": "3c53c184-35f0-5f07-bc1c-40b371f83f7d", 00:07:05.097 "is_configured": true, 00:07:05.097 "data_offset": 2048, 00:07:05.097 "data_size": 63488 00:07:05.097 } 00:07:05.097 ] 00:07:05.097 }' 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.097 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.682 [2024-12-15 18:38:05.905674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:05.682 [2024-12-15 18:38:05.905834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.682 [2024-12-15 18:38:05.908409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.682 [2024-12-15 18:38:05.908498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.682 [2024-12-15 18:38:05.908557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.682 [2024-12-15 18:38:05.908598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:05.682 { 
00:07:05.682 "results": [ 00:07:05.682 { 00:07:05.682 "job": "raid_bdev1", 00:07:05.682 "core_mask": "0x1", 00:07:05.682 "workload": "randrw", 00:07:05.682 "percentage": 50, 00:07:05.682 "status": "finished", 00:07:05.682 "queue_depth": 1, 00:07:05.682 "io_size": 131072, 00:07:05.682 "runtime": 1.405286, 00:07:05.682 "iops": 14870.994231779154, 00:07:05.682 "mibps": 1858.8742789723942, 00:07:05.682 "io_failed": 1, 00:07:05.682 "io_timeout": 0, 00:07:05.682 "avg_latency_us": 93.88854885557927, 00:07:05.682 "min_latency_us": 24.929257641921396, 00:07:05.682 "max_latency_us": 1352.216593886463 00:07:05.682 } 00:07:05.682 ], 00:07:05.682 "core_count": 1 00:07:05.682 } 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75760 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75760 ']' 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75760 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75760 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75760' 00:07:05.682 killing process with pid 75760 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75760 00:07:05.682 [2024-12-15 18:38:05.955266] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.682 18:38:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75760 00:07:05.682 [2024-12-15 18:38:05.983867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.941 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.K93vqt3Qei 00:07:05.941 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:05.941 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:05.942 00:07:05.942 real 0m3.420s 00:07:05.942 user 0m4.242s 00:07:05.942 sys 0m0.610s 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.942 18:38:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.942 ************************************ 00:07:05.942 END TEST raid_write_error_test 00:07:05.942 ************************************ 00:07:05.942 18:38:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:05.942 18:38:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:05.942 18:38:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:05.942 18:38:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.942 18:38:06 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.202 ************************************ 00:07:06.202 START TEST raid_state_function_test 00:07:06.202 ************************************ 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75898 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75898' 00:07:06.202 Process raid pid: 75898 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75898 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 75898 ']' 00:07:06.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.202 18:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.202 [2024-12-15 18:38:06.482363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:06.202 [2024-12-15 18:38:06.482488] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.202 [2024-12-15 18:38:06.635897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.461 [2024-12-15 18:38:06.674670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.461 [2024-12-15 18:38:06.750224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.461 [2024-12-15 18:38:06.750264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.032 [2024-12-15 18:38:07.315746] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.032 [2024-12-15 18:38:07.315834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.032 [2024-12-15 18:38:07.315846] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:07:07.032 [2024-12-15 18:38:07.315857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.032 "name": "Existed_Raid", 00:07:07.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.032 "strip_size_kb": 0, 00:07:07.032 "state": "configuring", 00:07:07.032 "raid_level": "raid1", 00:07:07.032 "superblock": false, 00:07:07.032 "num_base_bdevs": 2, 00:07:07.032 "num_base_bdevs_discovered": 0, 00:07:07.032 "num_base_bdevs_operational": 2, 00:07:07.032 "base_bdevs_list": [ 00:07:07.032 { 00:07:07.032 "name": "BaseBdev1", 00:07:07.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.032 "is_configured": false, 00:07:07.032 "data_offset": 0, 00:07:07.032 "data_size": 0 00:07:07.032 }, 00:07:07.032 { 00:07:07.032 "name": "BaseBdev2", 00:07:07.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.032 "is_configured": false, 00:07:07.032 "data_offset": 0, 00:07:07.032 "data_size": 0 00:07:07.032 } 00:07:07.032 ] 00:07:07.032 }' 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.032 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 [2024-12-15 18:38:07.778881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.602 [2024-12-15 18:38:07.779034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 [2024-12-15 18:38:07.786860] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.602 [2024-12-15 18:38:07.786947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.602 [2024-12-15 18:38:07.786974] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.602 [2024-12-15 18:38:07.786998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 [2024-12-15 18:38:07.810302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.602 BaseBdev1 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:07.602 
18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.602 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.602 [ 00:07:07.602 { 00:07:07.602 "name": "BaseBdev1", 00:07:07.602 "aliases": [ 00:07:07.602 "9293ed0f-1aaf-4f61-8dfc-5207444bb91f" 00:07:07.602 ], 00:07:07.602 "product_name": "Malloc disk", 00:07:07.602 "block_size": 512, 00:07:07.602 "num_blocks": 65536, 00:07:07.602 "uuid": "9293ed0f-1aaf-4f61-8dfc-5207444bb91f", 00:07:07.602 "assigned_rate_limits": { 00:07:07.602 "rw_ios_per_sec": 0, 00:07:07.602 "rw_mbytes_per_sec": 0, 00:07:07.602 "r_mbytes_per_sec": 0, 00:07:07.602 "w_mbytes_per_sec": 0 00:07:07.602 }, 00:07:07.602 "claimed": true, 00:07:07.602 "claim_type": "exclusive_write", 00:07:07.602 "zoned": false, 00:07:07.602 "supported_io_types": { 00:07:07.602 "read": true, 00:07:07.602 "write": true, 00:07:07.602 "unmap": true, 00:07:07.602 "flush": true, 00:07:07.603 "reset": true, 00:07:07.603 "nvme_admin": false, 00:07:07.603 "nvme_io": false, 00:07:07.603 "nvme_io_md": false, 00:07:07.603 "write_zeroes": true, 00:07:07.603 "zcopy": true, 00:07:07.603 "get_zone_info": 
false, 00:07:07.603 "zone_management": false, 00:07:07.603 "zone_append": false, 00:07:07.603 "compare": false, 00:07:07.603 "compare_and_write": false, 00:07:07.603 "abort": true, 00:07:07.603 "seek_hole": false, 00:07:07.603 "seek_data": false, 00:07:07.603 "copy": true, 00:07:07.603 "nvme_iov_md": false 00:07:07.603 }, 00:07:07.603 "memory_domains": [ 00:07:07.603 { 00:07:07.603 "dma_device_id": "system", 00:07:07.603 "dma_device_type": 1 00:07:07.603 }, 00:07:07.603 { 00:07:07.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.603 "dma_device_type": 2 00:07:07.603 } 00:07:07.603 ], 00:07:07.603 "driver_specific": {} 00:07:07.603 } 00:07:07.603 ] 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.603 "name": "Existed_Raid", 00:07:07.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.603 "strip_size_kb": 0, 00:07:07.603 "state": "configuring", 00:07:07.603 "raid_level": "raid1", 00:07:07.603 "superblock": false, 00:07:07.603 "num_base_bdevs": 2, 00:07:07.603 "num_base_bdevs_discovered": 1, 00:07:07.603 "num_base_bdevs_operational": 2, 00:07:07.603 "base_bdevs_list": [ 00:07:07.603 { 00:07:07.603 "name": "BaseBdev1", 00:07:07.603 "uuid": "9293ed0f-1aaf-4f61-8dfc-5207444bb91f", 00:07:07.603 "is_configured": true, 00:07:07.603 "data_offset": 0, 00:07:07.603 "data_size": 65536 00:07:07.603 }, 00:07:07.603 { 00:07:07.603 "name": "BaseBdev2", 00:07:07.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.603 "is_configured": false, 00:07:07.603 "data_offset": 0, 00:07:07.603 "data_size": 0 00:07:07.603 } 00:07:07.603 ] 00:07:07.603 }' 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.603 18:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.863 [2024-12-15 18:38:08.277547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.863 [2024-12-15 18:38:08.277688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.863 [2024-12-15 18:38:08.289530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.863 [2024-12-15 18:38:08.291710] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.863 [2024-12-15 18:38:08.292040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.863 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.123 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.123 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.123 "name": "Existed_Raid", 00:07:08.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.123 "strip_size_kb": 0, 00:07:08.123 "state": "configuring", 00:07:08.123 "raid_level": "raid1", 00:07:08.123 "superblock": false, 00:07:08.123 "num_base_bdevs": 2, 00:07:08.123 "num_base_bdevs_discovered": 1, 00:07:08.123 "num_base_bdevs_operational": 2, 00:07:08.123 "base_bdevs_list": [ 00:07:08.123 { 00:07:08.123 "name": "BaseBdev1", 00:07:08.123 "uuid": "9293ed0f-1aaf-4f61-8dfc-5207444bb91f", 00:07:08.124 
"is_configured": true, 00:07:08.124 "data_offset": 0, 00:07:08.124 "data_size": 65536 00:07:08.124 }, 00:07:08.124 { 00:07:08.124 "name": "BaseBdev2", 00:07:08.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.124 "is_configured": false, 00:07:08.124 "data_offset": 0, 00:07:08.124 "data_size": 0 00:07:08.124 } 00:07:08.124 ] 00:07:08.124 }' 00:07:08.124 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.124 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.384 [2024-12-15 18:38:08.721776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:08.384 [2024-12-15 18:38:08.721935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:08.384 [2024-12-15 18:38:08.721971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:08.384 [2024-12-15 18:38:08.722327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:08.384 [2024-12-15 18:38:08.722548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:08.384 [2024-12-15 18:38:08.722597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:08.384 [2024-12-15 18:38:08.722892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.384 BaseBdev2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.384 [ 00:07:08.384 { 00:07:08.384 "name": "BaseBdev2", 00:07:08.384 "aliases": [ 00:07:08.384 "776ac65b-c0d0-46cb-8f85-f5bc4f90b88c" 00:07:08.384 ], 00:07:08.384 "product_name": "Malloc disk", 00:07:08.384 "block_size": 512, 00:07:08.384 "num_blocks": 65536, 00:07:08.384 "uuid": "776ac65b-c0d0-46cb-8f85-f5bc4f90b88c", 00:07:08.384 "assigned_rate_limits": { 00:07:08.384 "rw_ios_per_sec": 0, 00:07:08.384 "rw_mbytes_per_sec": 0, 00:07:08.384 "r_mbytes_per_sec": 0, 00:07:08.384 "w_mbytes_per_sec": 0 00:07:08.384 }, 00:07:08.384 "claimed": true, 00:07:08.384 "claim_type": 
"exclusive_write", 00:07:08.384 "zoned": false, 00:07:08.384 "supported_io_types": { 00:07:08.384 "read": true, 00:07:08.384 "write": true, 00:07:08.384 "unmap": true, 00:07:08.384 "flush": true, 00:07:08.384 "reset": true, 00:07:08.384 "nvme_admin": false, 00:07:08.384 "nvme_io": false, 00:07:08.384 "nvme_io_md": false, 00:07:08.384 "write_zeroes": true, 00:07:08.384 "zcopy": true, 00:07:08.384 "get_zone_info": false, 00:07:08.384 "zone_management": false, 00:07:08.384 "zone_append": false, 00:07:08.384 "compare": false, 00:07:08.384 "compare_and_write": false, 00:07:08.384 "abort": true, 00:07:08.384 "seek_hole": false, 00:07:08.384 "seek_data": false, 00:07:08.384 "copy": true, 00:07:08.384 "nvme_iov_md": false 00:07:08.384 }, 00:07:08.384 "memory_domains": [ 00:07:08.384 { 00:07:08.384 "dma_device_id": "system", 00:07:08.384 "dma_device_type": 1 00:07:08.384 }, 00:07:08.384 { 00:07:08.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.384 "dma_device_type": 2 00:07:08.384 } 00:07:08.384 ], 00:07:08.384 "driver_specific": {} 00:07:08.384 } 00:07:08.384 ] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.384 
18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.384 "name": "Existed_Raid", 00:07:08.384 "uuid": "c703aec2-ce35-4e83-a9b9-5f9b078ecff5", 00:07:08.384 "strip_size_kb": 0, 00:07:08.384 "state": "online", 00:07:08.384 "raid_level": "raid1", 00:07:08.384 "superblock": false, 00:07:08.384 "num_base_bdevs": 2, 00:07:08.384 "num_base_bdevs_discovered": 2, 00:07:08.384 "num_base_bdevs_operational": 2, 00:07:08.384 "base_bdevs_list": [ 00:07:08.384 { 00:07:08.384 "name": "BaseBdev1", 00:07:08.384 "uuid": "9293ed0f-1aaf-4f61-8dfc-5207444bb91f", 00:07:08.384 "is_configured": true, 00:07:08.384 "data_offset": 0, 00:07:08.384 "data_size": 65536 00:07:08.384 }, 00:07:08.384 { 00:07:08.384 "name": "BaseBdev2", 
00:07:08.384 "uuid": "776ac65b-c0d0-46cb-8f85-f5bc4f90b88c", 00:07:08.384 "is_configured": true, 00:07:08.384 "data_offset": 0, 00:07:08.384 "data_size": 65536 00:07:08.384 } 00:07:08.384 ] 00:07:08.384 }' 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.384 18:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.954 [2024-12-15 18:38:09.189360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.954 "name": "Existed_Raid", 00:07:08.954 "aliases": [ 00:07:08.954 "c703aec2-ce35-4e83-a9b9-5f9b078ecff5" 00:07:08.954 ], 
00:07:08.954 "product_name": "Raid Volume", 00:07:08.954 "block_size": 512, 00:07:08.954 "num_blocks": 65536, 00:07:08.954 "uuid": "c703aec2-ce35-4e83-a9b9-5f9b078ecff5", 00:07:08.954 "assigned_rate_limits": { 00:07:08.954 "rw_ios_per_sec": 0, 00:07:08.954 "rw_mbytes_per_sec": 0, 00:07:08.954 "r_mbytes_per_sec": 0, 00:07:08.954 "w_mbytes_per_sec": 0 00:07:08.954 }, 00:07:08.954 "claimed": false, 00:07:08.954 "zoned": false, 00:07:08.954 "supported_io_types": { 00:07:08.954 "read": true, 00:07:08.954 "write": true, 00:07:08.954 "unmap": false, 00:07:08.954 "flush": false, 00:07:08.954 "reset": true, 00:07:08.954 "nvme_admin": false, 00:07:08.954 "nvme_io": false, 00:07:08.954 "nvme_io_md": false, 00:07:08.954 "write_zeroes": true, 00:07:08.954 "zcopy": false, 00:07:08.954 "get_zone_info": false, 00:07:08.954 "zone_management": false, 00:07:08.954 "zone_append": false, 00:07:08.954 "compare": false, 00:07:08.954 "compare_and_write": false, 00:07:08.954 "abort": false, 00:07:08.954 "seek_hole": false, 00:07:08.954 "seek_data": false, 00:07:08.954 "copy": false, 00:07:08.954 "nvme_iov_md": false 00:07:08.954 }, 00:07:08.954 "memory_domains": [ 00:07:08.954 { 00:07:08.954 "dma_device_id": "system", 00:07:08.954 "dma_device_type": 1 00:07:08.954 }, 00:07:08.954 { 00:07:08.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.954 "dma_device_type": 2 00:07:08.954 }, 00:07:08.954 { 00:07:08.954 "dma_device_id": "system", 00:07:08.954 "dma_device_type": 1 00:07:08.954 }, 00:07:08.954 { 00:07:08.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.954 "dma_device_type": 2 00:07:08.954 } 00:07:08.954 ], 00:07:08.954 "driver_specific": { 00:07:08.954 "raid": { 00:07:08.954 "uuid": "c703aec2-ce35-4e83-a9b9-5f9b078ecff5", 00:07:08.954 "strip_size_kb": 0, 00:07:08.954 "state": "online", 00:07:08.954 "raid_level": "raid1", 00:07:08.954 "superblock": false, 00:07:08.954 "num_base_bdevs": 2, 00:07:08.954 "num_base_bdevs_discovered": 2, 00:07:08.954 "num_base_bdevs_operational": 
2, 00:07:08.954 "base_bdevs_list": [ 00:07:08.954 { 00:07:08.954 "name": "BaseBdev1", 00:07:08.954 "uuid": "9293ed0f-1aaf-4f61-8dfc-5207444bb91f", 00:07:08.954 "is_configured": true, 00:07:08.954 "data_offset": 0, 00:07:08.954 "data_size": 65536 00:07:08.954 }, 00:07:08.954 { 00:07:08.954 "name": "BaseBdev2", 00:07:08.954 "uuid": "776ac65b-c0d0-46cb-8f85-f5bc4f90b88c", 00:07:08.954 "is_configured": true, 00:07:08.954 "data_offset": 0, 00:07:08.954 "data_size": 65536 00:07:08.954 } 00:07:08.954 ] 00:07:08.954 } 00:07:08.954 } 00:07:08.954 }' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:08.954 BaseBdev2' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.954 18:38:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.954 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.954 [2024-12-15 18:38:09.380728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.214 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.215 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.215 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.215 "name": "Existed_Raid", 00:07:09.215 "uuid": 
"c703aec2-ce35-4e83-a9b9-5f9b078ecff5", 00:07:09.215 "strip_size_kb": 0, 00:07:09.215 "state": "online", 00:07:09.215 "raid_level": "raid1", 00:07:09.215 "superblock": false, 00:07:09.215 "num_base_bdevs": 2, 00:07:09.215 "num_base_bdevs_discovered": 1, 00:07:09.215 "num_base_bdevs_operational": 1, 00:07:09.215 "base_bdevs_list": [ 00:07:09.215 { 00:07:09.215 "name": null, 00:07:09.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.215 "is_configured": false, 00:07:09.215 "data_offset": 0, 00:07:09.215 "data_size": 65536 00:07:09.215 }, 00:07:09.215 { 00:07:09.215 "name": "BaseBdev2", 00:07:09.215 "uuid": "776ac65b-c0d0-46cb-8f85-f5bc4f90b88c", 00:07:09.215 "is_configured": true, 00:07:09.215 "data_offset": 0, 00:07:09.215 "data_size": 65536 00:07:09.215 } 00:07:09.215 ] 00:07:09.215 }' 00:07:09.215 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.215 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.475 [2024-12-15 18:38:09.888963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:09.475 [2024-12-15 18:38:09.889089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:09.475 [2024-12-15 18:38:09.909717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.475 [2024-12-15 18:38:09.909860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.475 [2024-12-15 18:38:09.909903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:09.475 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:09.734 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:09.735 
18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75898 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 75898 ']' 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 75898 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.735 18:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75898 00:07:09.735 killing process with pid 75898 00:07:09.735 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.735 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.735 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75898' 00:07:09.735 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 75898 00:07:09.735 [2024-12-15 18:38:10.007286] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.735 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 75898 00:07:09.735 [2024-12-15 18:38:10.008832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:09.995 00:07:09.995 real 0m3.940s 00:07:09.995 user 0m6.025s 00:07:09.995 sys 0m0.874s 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:09.995 18:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 END TEST raid_state_function_test 00:07:09.995 ************************************ 00:07:09.995 18:38:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:09.995 18:38:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.995 18:38:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.995 18:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.995 ************************************ 00:07:09.995 START TEST raid_state_function_test_sb 00:07:09.995 ************************************ 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=76130 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76130' 00:07:09.995 Process raid pid: 76130 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 76130 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 76130 ']' 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.995 18:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.255 [2024-12-15 18:38:10.491676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:10.255 [2024-12-15 18:38:10.491890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.255 [2024-12-15 18:38:10.662962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.514 [2024-12-15 18:38:10.701620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.514 [2024-12-15 18:38:10.778148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.514 [2024-12-15 18:38:10.778312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.083 [2024-12-15 18:38:11.333420] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.083 [2024-12-15 18:38:11.333491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.083 [2024-12-15 18:38:11.333509] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.083 [2024-12-15 18:38:11.333522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.083 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.083 "name": "Existed_Raid", 00:07:11.083 "uuid": "307eb608-ef76-4459-88c2-5c041c799060", 00:07:11.083 "strip_size_kb": 0, 00:07:11.083 "state": "configuring", 00:07:11.083 "raid_level": "raid1", 00:07:11.083 "superblock": true, 00:07:11.083 "num_base_bdevs": 2, 00:07:11.083 "num_base_bdevs_discovered": 0, 00:07:11.083 "num_base_bdevs_operational": 2, 00:07:11.083 "base_bdevs_list": [ 00:07:11.083 { 00:07:11.083 "name": "BaseBdev1", 00:07:11.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.083 "is_configured": false, 00:07:11.083 "data_offset": 0, 00:07:11.083 "data_size": 0 00:07:11.083 }, 00:07:11.083 { 00:07:11.083 "name": "BaseBdev2", 00:07:11.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.083 "is_configured": false, 00:07:11.083 "data_offset": 0, 00:07:11.083 "data_size": 0 00:07:11.083 } 00:07:11.083 ] 00:07:11.083 }' 00:07:11.084 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.084 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.343 [2024-12-15 18:38:11.752691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.343 [2024-12-15 18:38:11.752814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.343 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.343 [2024-12-15 18:38:11.764651] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.343 [2024-12-15 18:38:11.764736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.343 [2024-12-15 18:38:11.764767] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.343 [2024-12-15 18:38:11.764790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.344 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.344 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:11.344 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.344 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:11.604 [2024-12-15 18:38:11.792010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.604 BaseBdev1 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.604 [ 00:07:11.604 { 00:07:11.604 "name": "BaseBdev1", 00:07:11.604 "aliases": [ 00:07:11.604 "2f5e8d94-260f-4535-9b8a-437db730b4f2" 00:07:11.604 ], 00:07:11.604 "product_name": "Malloc disk", 00:07:11.604 "block_size": 512, 
00:07:11.604 "num_blocks": 65536, 00:07:11.604 "uuid": "2f5e8d94-260f-4535-9b8a-437db730b4f2", 00:07:11.604 "assigned_rate_limits": { 00:07:11.604 "rw_ios_per_sec": 0, 00:07:11.604 "rw_mbytes_per_sec": 0, 00:07:11.604 "r_mbytes_per_sec": 0, 00:07:11.604 "w_mbytes_per_sec": 0 00:07:11.604 }, 00:07:11.604 "claimed": true, 00:07:11.604 "claim_type": "exclusive_write", 00:07:11.604 "zoned": false, 00:07:11.604 "supported_io_types": { 00:07:11.604 "read": true, 00:07:11.604 "write": true, 00:07:11.604 "unmap": true, 00:07:11.604 "flush": true, 00:07:11.604 "reset": true, 00:07:11.604 "nvme_admin": false, 00:07:11.604 "nvme_io": false, 00:07:11.604 "nvme_io_md": false, 00:07:11.604 "write_zeroes": true, 00:07:11.604 "zcopy": true, 00:07:11.604 "get_zone_info": false, 00:07:11.604 "zone_management": false, 00:07:11.604 "zone_append": false, 00:07:11.604 "compare": false, 00:07:11.604 "compare_and_write": false, 00:07:11.604 "abort": true, 00:07:11.604 "seek_hole": false, 00:07:11.604 "seek_data": false, 00:07:11.604 "copy": true, 00:07:11.604 "nvme_iov_md": false 00:07:11.604 }, 00:07:11.604 "memory_domains": [ 00:07:11.604 { 00:07:11.604 "dma_device_id": "system", 00:07:11.604 "dma_device_type": 1 00:07:11.604 }, 00:07:11.604 { 00:07:11.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.604 "dma_device_type": 2 00:07:11.604 } 00:07:11.604 ], 00:07:11.604 "driver_specific": {} 00:07:11.604 } 00:07:11.604 ] 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:11.604 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.605 "name": "Existed_Raid", 00:07:11.605 "uuid": "e17b012c-c090-4194-b982-077866bb31b1", 00:07:11.605 "strip_size_kb": 0, 00:07:11.605 "state": "configuring", 00:07:11.605 "raid_level": "raid1", 00:07:11.605 "superblock": true, 00:07:11.605 "num_base_bdevs": 2, 00:07:11.605 "num_base_bdevs_discovered": 1, 00:07:11.605 "num_base_bdevs_operational": 2, 00:07:11.605 "base_bdevs_list": [ 00:07:11.605 { 00:07:11.605 "name": "BaseBdev1", 
00:07:11.605 "uuid": "2f5e8d94-260f-4535-9b8a-437db730b4f2", 00:07:11.605 "is_configured": true, 00:07:11.605 "data_offset": 2048, 00:07:11.605 "data_size": 63488 00:07:11.605 }, 00:07:11.605 { 00:07:11.605 "name": "BaseBdev2", 00:07:11.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.605 "is_configured": false, 00:07:11.605 "data_offset": 0, 00:07:11.605 "data_size": 0 00:07:11.605 } 00:07:11.605 ] 00:07:11.605 }' 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.605 18:38:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.175 [2024-12-15 18:38:12.319172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.175 [2024-12-15 18:38:12.319332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.175 [2024-12-15 18:38:12.331180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.175 [2024-12-15 18:38:12.333395] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:07:12.175 [2024-12-15 18:38:12.333454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.175 "name": "Existed_Raid", 00:07:12.175 "uuid": "6d2392df-e80e-4433-b6dc-2657b938b65d", 00:07:12.175 "strip_size_kb": 0, 00:07:12.175 "state": "configuring", 00:07:12.175 "raid_level": "raid1", 00:07:12.175 "superblock": true, 00:07:12.175 "num_base_bdevs": 2, 00:07:12.175 "num_base_bdevs_discovered": 1, 00:07:12.175 "num_base_bdevs_operational": 2, 00:07:12.175 "base_bdevs_list": [ 00:07:12.175 { 00:07:12.175 "name": "BaseBdev1", 00:07:12.175 "uuid": "2f5e8d94-260f-4535-9b8a-437db730b4f2", 00:07:12.175 "is_configured": true, 00:07:12.175 "data_offset": 2048, 00:07:12.175 "data_size": 63488 00:07:12.175 }, 00:07:12.175 { 00:07:12.175 "name": "BaseBdev2", 00:07:12.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.175 "is_configured": false, 00:07:12.175 "data_offset": 0, 00:07:12.175 "data_size": 0 00:07:12.175 } 00:07:12.175 ] 00:07:12.175 }' 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.175 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.435 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.436 [2024-12-15 18:38:12.799169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.436 [2024-12-15 18:38:12.799500] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:12.436 [2024-12-15 18:38:12.799554] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:12.436 [2024-12-15 18:38:12.799903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.436 [2024-12-15 18:38:12.800124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:12.436 BaseBdev2 00:07:12.436 [2024-12-15 18:38:12.800180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:12.436 [2024-12-15 18:38:12.800360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.436 [ 00:07:12.436 { 00:07:12.436 "name": "BaseBdev2", 00:07:12.436 "aliases": [ 00:07:12.436 "1e09f370-79eb-458d-adc2-11737c845e80" 00:07:12.436 ], 00:07:12.436 "product_name": "Malloc disk", 00:07:12.436 "block_size": 512, 00:07:12.436 "num_blocks": 65536, 00:07:12.436 "uuid": "1e09f370-79eb-458d-adc2-11737c845e80", 00:07:12.436 "assigned_rate_limits": { 00:07:12.436 "rw_ios_per_sec": 0, 00:07:12.436 "rw_mbytes_per_sec": 0, 00:07:12.436 "r_mbytes_per_sec": 0, 00:07:12.436 "w_mbytes_per_sec": 0 00:07:12.436 }, 00:07:12.436 "claimed": true, 00:07:12.436 "claim_type": "exclusive_write", 00:07:12.436 "zoned": false, 00:07:12.436 "supported_io_types": { 00:07:12.436 "read": true, 00:07:12.436 "write": true, 00:07:12.436 "unmap": true, 00:07:12.436 "flush": true, 00:07:12.436 "reset": true, 00:07:12.436 "nvme_admin": false, 00:07:12.436 "nvme_io": false, 00:07:12.436 "nvme_io_md": false, 00:07:12.436 "write_zeroes": true, 00:07:12.436 "zcopy": true, 00:07:12.436 "get_zone_info": false, 00:07:12.436 "zone_management": false, 00:07:12.436 "zone_append": false, 00:07:12.436 "compare": false, 00:07:12.436 "compare_and_write": false, 00:07:12.436 "abort": true, 00:07:12.436 "seek_hole": false, 00:07:12.436 "seek_data": false, 00:07:12.436 "copy": true, 00:07:12.436 "nvme_iov_md": false 00:07:12.436 }, 00:07:12.436 "memory_domains": [ 00:07:12.436 { 00:07:12.436 "dma_device_id": "system", 00:07:12.436 "dma_device_type": 1 00:07:12.436 }, 00:07:12.436 { 00:07:12.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.436 "dma_device_type": 2 00:07:12.436 } 00:07:12.436 ], 00:07:12.436 "driver_specific": 
{} 00:07:12.436 } 00:07:12.436 ] 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.436 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.696 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.696 "name": "Existed_Raid", 00:07:12.696 "uuid": "6d2392df-e80e-4433-b6dc-2657b938b65d", 00:07:12.696 "strip_size_kb": 0, 00:07:12.696 "state": "online", 00:07:12.696 "raid_level": "raid1", 00:07:12.696 "superblock": true, 00:07:12.696 "num_base_bdevs": 2, 00:07:12.696 "num_base_bdevs_discovered": 2, 00:07:12.696 "num_base_bdevs_operational": 2, 00:07:12.696 "base_bdevs_list": [ 00:07:12.696 { 00:07:12.696 "name": "BaseBdev1", 00:07:12.696 "uuid": "2f5e8d94-260f-4535-9b8a-437db730b4f2", 00:07:12.696 "is_configured": true, 00:07:12.696 "data_offset": 2048, 00:07:12.696 "data_size": 63488 00:07:12.696 }, 00:07:12.696 { 00:07:12.696 "name": "BaseBdev2", 00:07:12.696 "uuid": "1e09f370-79eb-458d-adc2-11737c845e80", 00:07:12.696 "is_configured": true, 00:07:12.696 "data_offset": 2048, 00:07:12.696 "data_size": 63488 00:07:12.696 } 00:07:12.696 ] 00:07:12.696 }' 00:07:12.696 18:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.696 18:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:12.956 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.957 [2024-12-15 18:38:13.290707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:12.957 "name": "Existed_Raid", 00:07:12.957 "aliases": [ 00:07:12.957 "6d2392df-e80e-4433-b6dc-2657b938b65d" 00:07:12.957 ], 00:07:12.957 "product_name": "Raid Volume", 00:07:12.957 "block_size": 512, 00:07:12.957 "num_blocks": 63488, 00:07:12.957 "uuid": "6d2392df-e80e-4433-b6dc-2657b938b65d", 00:07:12.957 "assigned_rate_limits": { 00:07:12.957 "rw_ios_per_sec": 0, 00:07:12.957 "rw_mbytes_per_sec": 0, 00:07:12.957 "r_mbytes_per_sec": 0, 00:07:12.957 "w_mbytes_per_sec": 0 00:07:12.957 }, 00:07:12.957 "claimed": false, 00:07:12.957 "zoned": false, 00:07:12.957 "supported_io_types": { 00:07:12.957 "read": true, 00:07:12.957 "write": true, 00:07:12.957 "unmap": false, 00:07:12.957 "flush": false, 00:07:12.957 "reset": true, 00:07:12.957 "nvme_admin": false, 00:07:12.957 "nvme_io": false, 00:07:12.957 "nvme_io_md": false, 00:07:12.957 "write_zeroes": true, 00:07:12.957 "zcopy": false, 00:07:12.957 "get_zone_info": false, 00:07:12.957 "zone_management": false, 00:07:12.957 "zone_append": false, 00:07:12.957 "compare": false, 00:07:12.957 "compare_and_write": false, 
00:07:12.957 "abort": false, 00:07:12.957 "seek_hole": false, 00:07:12.957 "seek_data": false, 00:07:12.957 "copy": false, 00:07:12.957 "nvme_iov_md": false 00:07:12.957 }, 00:07:12.957 "memory_domains": [ 00:07:12.957 { 00:07:12.957 "dma_device_id": "system", 00:07:12.957 "dma_device_type": 1 00:07:12.957 }, 00:07:12.957 { 00:07:12.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.957 "dma_device_type": 2 00:07:12.957 }, 00:07:12.957 { 00:07:12.957 "dma_device_id": "system", 00:07:12.957 "dma_device_type": 1 00:07:12.957 }, 00:07:12.957 { 00:07:12.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.957 "dma_device_type": 2 00:07:12.957 } 00:07:12.957 ], 00:07:12.957 "driver_specific": { 00:07:12.957 "raid": { 00:07:12.957 "uuid": "6d2392df-e80e-4433-b6dc-2657b938b65d", 00:07:12.957 "strip_size_kb": 0, 00:07:12.957 "state": "online", 00:07:12.957 "raid_level": "raid1", 00:07:12.957 "superblock": true, 00:07:12.957 "num_base_bdevs": 2, 00:07:12.957 "num_base_bdevs_discovered": 2, 00:07:12.957 "num_base_bdevs_operational": 2, 00:07:12.957 "base_bdevs_list": [ 00:07:12.957 { 00:07:12.957 "name": "BaseBdev1", 00:07:12.957 "uuid": "2f5e8d94-260f-4535-9b8a-437db730b4f2", 00:07:12.957 "is_configured": true, 00:07:12.957 "data_offset": 2048, 00:07:12.957 "data_size": 63488 00:07:12.957 }, 00:07:12.957 { 00:07:12.957 "name": "BaseBdev2", 00:07:12.957 "uuid": "1e09f370-79eb-458d-adc2-11737c845e80", 00:07:12.957 "is_configured": true, 00:07:12.957 "data_offset": 2048, 00:07:12.957 "data_size": 63488 00:07:12.957 } 00:07:12.957 ] 00:07:12.957 } 00:07:12.957 } 00:07:12.957 }' 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.957 BaseBdev2' 00:07:12.957 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.217 [2024-12-15 18:38:13.514034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:13.217 18:38:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.217 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.217 "name": "Existed_Raid", 00:07:13.217 "uuid": "6d2392df-e80e-4433-b6dc-2657b938b65d", 00:07:13.217 "strip_size_kb": 0, 00:07:13.217 "state": "online", 00:07:13.217 "raid_level": "raid1", 00:07:13.217 "superblock": true, 00:07:13.217 "num_base_bdevs": 2, 00:07:13.217 "num_base_bdevs_discovered": 1, 00:07:13.217 "num_base_bdevs_operational": 1, 00:07:13.217 "base_bdevs_list": [ 00:07:13.217 { 00:07:13.217 "name": null, 00:07:13.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.218 "is_configured": false, 00:07:13.218 "data_offset": 0, 00:07:13.218 "data_size": 63488 00:07:13.218 }, 00:07:13.218 { 00:07:13.218 "name": "BaseBdev2", 00:07:13.218 "uuid": "1e09f370-79eb-458d-adc2-11737c845e80", 00:07:13.218 "is_configured": true, 00:07:13.218 "data_offset": 2048, 00:07:13.218 "data_size": 63488 00:07:13.218 } 00:07:13.218 ] 00:07:13.218 }' 00:07:13.218 
18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.218 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.787 18:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.787 [2024-12-15 18:38:14.041152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:13.787 [2024-12-15 18:38:14.041355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.787 [2024-12-15 18:38:14.062134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.787 [2024-12-15 18:38:14.062240] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.787 [2024-12-15 18:38:14.062284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.787 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 76130 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76130 ']' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 76130 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76130 00:07:13.788 killing process with pid 76130 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76130' 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 76130 00:07:13.788 [2024-12-15 18:38:14.162712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.788 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 76130 00:07:13.788 [2024-12-15 18:38:14.164279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.047 18:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.047 00:07:14.047 real 0m4.091s 00:07:14.047 user 0m6.289s 00:07:14.047 sys 0m0.925s 00:07:14.307 ************************************ 00:07:14.307 END TEST raid_state_function_test_sb 00:07:14.307 ************************************ 00:07:14.307 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.307 18:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.307 18:38:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:14.307 18:38:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:14.307 18:38:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.307 18:38:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.307 
************************************ 00:07:14.307 START TEST raid_superblock_test 00:07:14.307 ************************************ 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76370 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76370 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76370 ']' 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.307 18:38:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.307 [2024-12-15 18:38:14.646736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:14.307 [2024-12-15 18:38:14.646966] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76370 ] 00:07:14.567 [2024-12-15 18:38:14.815886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.567 [2024-12-15 18:38:14.853740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.567 [2024-12-15 18:38:14.929424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.567 [2024-12-15 18:38:14.929468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:15.147 
18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.147 malloc1 00:07:15.147 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 [2024-12-15 18:38:15.482251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:15.148 [2024-12-15 18:38:15.482426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.148 [2024-12-15 18:38:15.482481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:15.148 [2024-12-15 18:38:15.482534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.148 [2024-12-15 18:38:15.485141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.148 [2024-12-15 18:38:15.485219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:15.148 pt1 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 malloc2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 [2024-12-15 18:38:15.520662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:15.148 [2024-12-15 18:38:15.520717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.148 [2024-12-15 18:38:15.520734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:15.148 [2024-12-15 18:38:15.520744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.148 [2024-12-15 18:38:15.523035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.148 [2024-12-15 18:38:15.523122] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:15.148 
pt2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 [2024-12-15 18:38:15.532698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:15.148 [2024-12-15 18:38:15.534761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:15.148 [2024-12-15 18:38:15.534956] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:15.148 [2024-12-15 18:38:15.534975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:15.148 [2024-12-15 18:38:15.535261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:15.148 [2024-12-15 18:38:15.535395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:15.148 [2024-12-15 18:38:15.535410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:15.148 [2024-12-15 18:38:15.535533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.148 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.415 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.415 "name": "raid_bdev1", 00:07:15.415 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:15.415 "strip_size_kb": 0, 00:07:15.415 "state": "online", 00:07:15.415 "raid_level": "raid1", 00:07:15.415 "superblock": true, 00:07:15.415 "num_base_bdevs": 2, 00:07:15.415 "num_base_bdevs_discovered": 2, 00:07:15.415 "num_base_bdevs_operational": 2, 00:07:15.415 "base_bdevs_list": [ 00:07:15.415 { 00:07:15.415 "name": "pt1", 00:07:15.415 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:15.415 "is_configured": true, 00:07:15.415 "data_offset": 2048, 00:07:15.415 "data_size": 63488 00:07:15.415 }, 00:07:15.415 { 00:07:15.415 "name": "pt2", 00:07:15.415 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.415 "is_configured": true, 00:07:15.415 "data_offset": 2048, 00:07:15.415 "data_size": 63488 00:07:15.415 } 00:07:15.415 ] 00:07:15.415 }' 00:07:15.415 18:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.415 18:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.688 [2024-12-15 18:38:16.020296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.688 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:15.688 "name": "raid_bdev1", 00:07:15.688 "aliases": [ 00:07:15.688 "5d823cf0-3b52-416b-8add-4d16d0f3c1b8" 00:07:15.688 ], 00:07:15.688 "product_name": "Raid Volume", 00:07:15.688 "block_size": 512, 00:07:15.688 "num_blocks": 63488, 00:07:15.688 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:15.688 "assigned_rate_limits": { 00:07:15.688 "rw_ios_per_sec": 0, 00:07:15.688 "rw_mbytes_per_sec": 0, 00:07:15.688 "r_mbytes_per_sec": 0, 00:07:15.688 "w_mbytes_per_sec": 0 00:07:15.688 }, 00:07:15.688 "claimed": false, 00:07:15.688 "zoned": false, 00:07:15.688 "supported_io_types": { 00:07:15.688 "read": true, 00:07:15.688 "write": true, 00:07:15.688 "unmap": false, 00:07:15.688 "flush": false, 00:07:15.688 "reset": true, 00:07:15.688 "nvme_admin": false, 00:07:15.688 "nvme_io": false, 00:07:15.688 "nvme_io_md": false, 00:07:15.688 "write_zeroes": true, 00:07:15.688 "zcopy": false, 00:07:15.688 "get_zone_info": false, 00:07:15.688 "zone_management": false, 00:07:15.688 "zone_append": false, 00:07:15.688 "compare": false, 00:07:15.688 "compare_and_write": false, 00:07:15.688 "abort": false, 00:07:15.688 "seek_hole": false, 00:07:15.688 "seek_data": false, 00:07:15.689 "copy": false, 00:07:15.689 "nvme_iov_md": false 00:07:15.689 }, 00:07:15.689 "memory_domains": [ 00:07:15.689 { 00:07:15.689 "dma_device_id": "system", 00:07:15.689 "dma_device_type": 1 00:07:15.689 }, 00:07:15.689 { 00:07:15.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.689 "dma_device_type": 2 00:07:15.689 }, 00:07:15.689 { 00:07:15.689 "dma_device_id": "system", 00:07:15.689 "dma_device_type": 1 00:07:15.689 }, 00:07:15.689 { 00:07:15.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.689 "dma_device_type": 2 00:07:15.689 } 00:07:15.689 ], 00:07:15.689 "driver_specific": { 00:07:15.689 "raid": { 00:07:15.689 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:15.689 "strip_size_kb": 0, 00:07:15.689 "state": "online", 00:07:15.689 "raid_level": "raid1", 
00:07:15.689 "superblock": true, 00:07:15.689 "num_base_bdevs": 2, 00:07:15.689 "num_base_bdevs_discovered": 2, 00:07:15.689 "num_base_bdevs_operational": 2, 00:07:15.689 "base_bdevs_list": [ 00:07:15.689 { 00:07:15.689 "name": "pt1", 00:07:15.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.689 "is_configured": true, 00:07:15.689 "data_offset": 2048, 00:07:15.689 "data_size": 63488 00:07:15.689 }, 00:07:15.689 { 00:07:15.689 "name": "pt2", 00:07:15.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.689 "is_configured": true, 00:07:15.689 "data_offset": 2048, 00:07:15.689 "data_size": 63488 00:07:15.689 } 00:07:15.689 ] 00:07:15.689 } 00:07:15.689 } 00:07:15.689 }' 00:07:15.689 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.689 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:15.689 pt2' 00:07:15.689 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 [2024-12-15 18:38:16.235759] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d823cf0-3b52-416b-8add-4d16d0f3c1b8 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5d823cf0-3b52-416b-8add-4d16d0f3c1b8 ']' 00:07:15.960 18:38:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.960 [2024-12-15 18:38:16.279424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:15.960 [2024-12-15 18:38:16.279494] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.960 [2024-12-15 18:38:16.279622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.960 [2024-12-15 18:38:16.279722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.960 [2024-12-15 18:38:16.279775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.960 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:15.961 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.221 18:38:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.221 [2024-12-15 18:38:16.415224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:16.221 [2024-12-15 18:38:16.417439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:16.221 [2024-12-15 18:38:16.417506] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:16.221 [2024-12-15 18:38:16.417563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:16.221 [2024-12-15 18:38:16.417592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.221 [2024-12-15 18:38:16.417602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:16.221 request: 00:07:16.221 { 00:07:16.221 "name": "raid_bdev1", 00:07:16.221 "raid_level": "raid1", 00:07:16.221 "base_bdevs": [ 00:07:16.221 "malloc1", 00:07:16.221 "malloc2" 00:07:16.221 ], 00:07:16.221 "superblock": false, 00:07:16.221 "method": "bdev_raid_create", 00:07:16.221 "req_id": 1 00:07:16.221 } 00:07:16.221 Got 
JSON-RPC error response 00:07:16.221 response: 00:07:16.221 { 00:07:16.221 "code": -17, 00:07:16.221 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:16.221 } 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.221 [2024-12-15 18:38:16.483047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:16.221 [2024-12-15 18:38:16.483099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:16.221 [2024-12-15 18:38:16.483118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:16.221 [2024-12-15 18:38:16.483127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.221 [2024-12-15 18:38:16.485536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.221 [2024-12-15 18:38:16.485614] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:16.221 [2024-12-15 18:38:16.485694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:16.221 [2024-12-15 18:38:16.485729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:16.221 pt1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.221 
18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.221 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.221 "name": "raid_bdev1", 00:07:16.221 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:16.221 "strip_size_kb": 0, 00:07:16.221 "state": "configuring", 00:07:16.221 "raid_level": "raid1", 00:07:16.221 "superblock": true, 00:07:16.221 "num_base_bdevs": 2, 00:07:16.221 "num_base_bdevs_discovered": 1, 00:07:16.221 "num_base_bdevs_operational": 2, 00:07:16.221 "base_bdevs_list": [ 00:07:16.221 { 00:07:16.221 "name": "pt1", 00:07:16.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.221 "is_configured": true, 00:07:16.221 "data_offset": 2048, 00:07:16.221 "data_size": 63488 00:07:16.221 }, 00:07:16.221 { 00:07:16.221 "name": null, 00:07:16.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.221 "is_configured": false, 00:07:16.221 "data_offset": 2048, 00:07:16.222 "data_size": 63488 00:07:16.222 } 00:07:16.222 ] 00:07:16.222 }' 00:07:16.222 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.222 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.481 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:16.481 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:16.481 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.741 [2024-12-15 18:38:16.926370] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:16.741 [2024-12-15 18:38:16.926519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.741 [2024-12-15 18:38:16.926564] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:16.741 [2024-12-15 18:38:16.926601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.741 [2024-12-15 18:38:16.927143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.741 [2024-12-15 18:38:16.927201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:16.741 [2024-12-15 18:38:16.927321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:16.741 [2024-12-15 18:38:16.927376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:16.741 [2024-12-15 18:38:16.927505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:16.741 [2024-12-15 18:38:16.927539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:16.741 [2024-12-15 18:38:16.927810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:16.741 [2024-12-15 18:38:16.927977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:16.741 [2024-12-15 18:38:16.928024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:07:16.741 [2024-12-15 18:38:16.928169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.741 pt2 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.741 "name": "raid_bdev1", 00:07:16.741 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:16.741 "strip_size_kb": 0, 00:07:16.741 "state": "online", 00:07:16.741 "raid_level": "raid1", 00:07:16.741 "superblock": true, 00:07:16.741 "num_base_bdevs": 2, 00:07:16.741 "num_base_bdevs_discovered": 2, 00:07:16.741 "num_base_bdevs_operational": 2, 00:07:16.741 "base_bdevs_list": [ 00:07:16.741 { 00:07:16.741 "name": "pt1", 00:07:16.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.741 "is_configured": true, 00:07:16.741 "data_offset": 2048, 00:07:16.741 "data_size": 63488 00:07:16.741 }, 00:07:16.741 { 00:07:16.741 "name": "pt2", 00:07:16.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.741 "is_configured": true, 00:07:16.741 "data_offset": 2048, 00:07:16.741 "data_size": 63488 00:07:16.741 } 00:07:16.741 ] 00:07:16.741 }' 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.741 18:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.002 [2024-12-15 18:38:17.397776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.002 "name": "raid_bdev1", 00:07:17.002 "aliases": [ 00:07:17.002 "5d823cf0-3b52-416b-8add-4d16d0f3c1b8" 00:07:17.002 ], 00:07:17.002 "product_name": "Raid Volume", 00:07:17.002 "block_size": 512, 00:07:17.002 "num_blocks": 63488, 00:07:17.002 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:17.002 "assigned_rate_limits": { 00:07:17.002 "rw_ios_per_sec": 0, 00:07:17.002 "rw_mbytes_per_sec": 0, 00:07:17.002 "r_mbytes_per_sec": 0, 00:07:17.002 "w_mbytes_per_sec": 0 00:07:17.002 }, 00:07:17.002 "claimed": false, 00:07:17.002 "zoned": false, 00:07:17.002 "supported_io_types": { 00:07:17.002 "read": true, 00:07:17.002 "write": true, 00:07:17.002 "unmap": false, 00:07:17.002 "flush": false, 00:07:17.002 "reset": true, 00:07:17.002 "nvme_admin": false, 00:07:17.002 "nvme_io": false, 00:07:17.002 "nvme_io_md": false, 00:07:17.002 "write_zeroes": true, 00:07:17.002 "zcopy": false, 00:07:17.002 "get_zone_info": false, 00:07:17.002 "zone_management": false, 00:07:17.002 "zone_append": false, 00:07:17.002 "compare": false, 00:07:17.002 "compare_and_write": false, 00:07:17.002 "abort": false, 00:07:17.002 "seek_hole": false, 00:07:17.002 "seek_data": false, 00:07:17.002 "copy": false, 00:07:17.002 "nvme_iov_md": false 00:07:17.002 }, 00:07:17.002 "memory_domains": [ 00:07:17.002 { 00:07:17.002 "dma_device_id": 
"system", 00:07:17.002 "dma_device_type": 1 00:07:17.002 }, 00:07:17.002 { 00:07:17.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.002 "dma_device_type": 2 00:07:17.002 }, 00:07:17.002 { 00:07:17.002 "dma_device_id": "system", 00:07:17.002 "dma_device_type": 1 00:07:17.002 }, 00:07:17.002 { 00:07:17.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.002 "dma_device_type": 2 00:07:17.002 } 00:07:17.002 ], 00:07:17.002 "driver_specific": { 00:07:17.002 "raid": { 00:07:17.002 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:17.002 "strip_size_kb": 0, 00:07:17.002 "state": "online", 00:07:17.002 "raid_level": "raid1", 00:07:17.002 "superblock": true, 00:07:17.002 "num_base_bdevs": 2, 00:07:17.002 "num_base_bdevs_discovered": 2, 00:07:17.002 "num_base_bdevs_operational": 2, 00:07:17.002 "base_bdevs_list": [ 00:07:17.002 { 00:07:17.002 "name": "pt1", 00:07:17.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.002 "is_configured": true, 00:07:17.002 "data_offset": 2048, 00:07:17.002 "data_size": 63488 00:07:17.002 }, 00:07:17.002 { 00:07:17.002 "name": "pt2", 00:07:17.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.002 "is_configured": true, 00:07:17.002 "data_offset": 2048, 00:07:17.002 "data_size": 63488 00:07:17.002 } 00:07:17.002 ] 00:07:17.002 } 00:07:17.002 } 00:07:17.002 }' 00:07:17.002 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:17.262 pt2' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 [2024-12-15 18:38:17.629332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5d823cf0-3b52-416b-8add-4d16d0f3c1b8 '!=' 5d823cf0-3b52-416b-8add-4d16d0f3c1b8 ']' 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.262 [2024-12-15 18:38:17.657076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:17.262 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.263 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.522 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.522 "name": "raid_bdev1", 00:07:17.522 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:17.522 "strip_size_kb": 0, 00:07:17.522 "state": "online", 00:07:17.522 "raid_level": "raid1", 00:07:17.523 "superblock": true, 00:07:17.523 "num_base_bdevs": 2, 00:07:17.523 "num_base_bdevs_discovered": 1, 00:07:17.523 "num_base_bdevs_operational": 1, 00:07:17.523 "base_bdevs_list": [ 00:07:17.523 { 00:07:17.523 "name": null, 00:07:17.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.523 "is_configured": false, 00:07:17.523 "data_offset": 0, 00:07:17.523 "data_size": 63488 00:07:17.523 }, 00:07:17.523 { 00:07:17.523 "name": "pt2", 00:07:17.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.523 "is_configured": true, 00:07:17.523 "data_offset": 2048, 00:07:17.523 "data_size": 63488 00:07:17.523 } 00:07:17.523 ] 00:07:17.523 }' 
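The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` invocations traced above select the matching entry from `bdev_raid_get_bdevs all` with jq and compare fields such as `state`, `raid_level`, `strip_size_kb`, and the discovered base-bdev count against the expected arguments. A runnable sketch of that check, with the field values taken from the `raid_bdev_info` dump above and the jq/RPC step replaced by hard-coded assignments (the exact set of compared fields is an assumption read off the helper's locals in the trace):

```shell
#!/usr/bin/env bash
# Expected values, as passed positionally to verify_raid_bdev_state.
expected_state=online
expected_level=raid1
expected_strip_size=0
expected_discovered=1

# In bdev_raid.sh these come from:
#   rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# Here they are the values from the raid_bdev_info JSON above, after pt1
# was deleted and the array degraded to one discovered base bdev.
state=online
raid_level=raid1
strip_size_kb=0
num_base_bdevs_discovered=1

[[ $state == "$expected_state" ]] \
    && [[ $raid_level == "$expected_level" ]] \
    && (( strip_size_kb == expected_strip_size )) \
    && (( num_base_bdevs_discovered == expected_discovered )) \
    && echo "raid_bdev1 state verified"
```

Note that raid1 keeps `state: online` with only one of two base bdevs configured, which is exactly what the degraded-array check above asserts.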
00:07:17.523 18:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.523 18:38:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.783 [2024-12-15 18:38:18.144356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.783 [2024-12-15 18:38:18.144460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.783 [2024-12-15 18:38:18.144585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.783 [2024-12-15 18:38:18.144676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.783 [2024-12-15 18:38:18.144735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.783 [2024-12-15 18:38:18.216365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:17.783 [2024-12-15 18:38:18.216484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.783 [2024-12-15 18:38:18.216520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:17.783 [2024-12-15 18:38:18.216554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.783 
[2024-12-15 18:38:18.219165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.783 [2024-12-15 18:38:18.219238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:17.783 [2024-12-15 18:38:18.219349] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:17.783 [2024-12-15 18:38:18.219416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:17.783 [2024-12-15 18:38:18.219544] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:17.783 [2024-12-15 18:38:18.219579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:17.783 [2024-12-15 18:38:18.219858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:17.783 [2024-12-15 18:38:18.220034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:17.783 [2024-12-15 18:38:18.220078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:07:17.783 [2024-12-15 18:38:18.220253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.783 pt2 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.783 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.043 "name": "raid_bdev1", 00:07:18.043 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:18.043 "strip_size_kb": 0, 00:07:18.043 "state": "online", 00:07:18.043 "raid_level": "raid1", 00:07:18.043 "superblock": true, 00:07:18.043 "num_base_bdevs": 2, 00:07:18.043 "num_base_bdevs_discovered": 1, 00:07:18.043 "num_base_bdevs_operational": 1, 00:07:18.043 "base_bdevs_list": [ 00:07:18.043 { 00:07:18.043 "name": null, 00:07:18.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.043 "is_configured": false, 00:07:18.043 "data_offset": 2048, 00:07:18.043 "data_size": 63488 00:07:18.043 }, 00:07:18.043 { 00:07:18.043 "name": "pt2", 00:07:18.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.043 "is_configured": true, 00:07:18.043 "data_offset": 2048, 00:07:18.043 "data_size": 63488 00:07:18.043 } 00:07:18.043 ] 00:07:18.043 }' 
00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.043 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 [2024-12-15 18:38:18.635736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.304 [2024-12-15 18:38:18.635851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.304 [2024-12-15 18:38:18.635942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.304 [2024-12-15 18:38:18.635996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.304 [2024-12-15 18:38:18.636009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 [2024-12-15 18:38:18.699583] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.304 [2024-12-15 18:38:18.699657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.304 [2024-12-15 18:38:18.699674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:18.304 [2024-12-15 18:38:18.699691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.304 [2024-12-15 18:38:18.702215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.304 [2024-12-15 18:38:18.702310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.304 [2024-12-15 18:38:18.702402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:18.304 [2024-12-15 18:38:18.702454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:18.304 [2024-12-15 18:38:18.702570] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:18.304 [2024-12-15 18:38:18.702584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.304 [2024-12-15 18:38:18.702602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:18.304 [2024-12-15 18:38:18.702641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:18.304 [2024-12-15 18:38:18.702717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:18.304 [2024-12-15 18:38:18.702727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:18.304 [2024-12-15 18:38:18.702968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:18.304 [2024-12-15 18:38:18.703100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:18.304 [2024-12-15 18:38:18.703109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:18.304 [2024-12-15 18:38:18.703231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.304 pt1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.304 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.564 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.564 "name": "raid_bdev1", 00:07:18.564 "uuid": "5d823cf0-3b52-416b-8add-4d16d0f3c1b8", 00:07:18.564 "strip_size_kb": 0, 00:07:18.564 "state": "online", 00:07:18.564 "raid_level": "raid1", 00:07:18.564 "superblock": true, 00:07:18.564 "num_base_bdevs": 2, 00:07:18.564 "num_base_bdevs_discovered": 1, 00:07:18.564 "num_base_bdevs_operational": 1, 00:07:18.564 "base_bdevs_list": [ 00:07:18.564 { 00:07:18.564 "name": null, 00:07:18.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.564 "is_configured": false, 00:07:18.564 "data_offset": 2048, 00:07:18.564 "data_size": 63488 00:07:18.564 }, 00:07:18.564 { 00:07:18.564 "name": "pt2", 00:07:18.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.564 "is_configured": true, 00:07:18.564 "data_offset": 2048, 00:07:18.564 "data_size": 63488 00:07:18.564 } 00:07:18.564 ] 00:07:18.564 }' 00:07:18.564 18:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.564 18:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.824 [2024-12-15 18:38:19.187069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5d823cf0-3b52-416b-8add-4d16d0f3c1b8 '!=' 5d823cf0-3b52-416b-8add-4d16d0f3c1b8 ']' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76370 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76370 ']' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76370 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76370 00:07:18.824 18:38:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.824 killing process with pid 76370 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76370' 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76370 00:07:18.824 [2024-12-15 18:38:19.257353] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.824 [2024-12-15 18:38:19.257450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.824 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76370 00:07:18.824 [2024-12-15 18:38:19.257509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.824 [2024-12-15 18:38:19.257520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:19.084 [2024-12-15 18:38:19.299341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.344 18:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:19.344 00:07:19.344 real 0m5.064s 00:07:19.344 user 0m8.126s 00:07:19.344 sys 0m1.111s 00:07:19.344 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.344 18:38:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.344 ************************************ 00:07:19.344 END TEST raid_superblock_test 00:07:19.344 ************************************ 00:07:19.344 18:38:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:19.344 18:38:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.344 18:38:19 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.344 18:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.344 ************************************ 00:07:19.344 START TEST raid_read_error_test 00:07:19.344 ************************************ 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:19.344 18:38:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wNgMkBd3r7 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76689 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76689 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76689 ']' 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:19.344 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.345 18:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.604 [2024-12-15 18:38:19.795950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:19.604 [2024-12-15 18:38:19.796157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76689 ] 00:07:19.604 [2024-12-15 18:38:19.964436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.604 [2024-12-15 18:38:20.004942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.864 [2024-12-15 18:38:20.080690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.864 [2024-12-15 18:38:20.080729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 BaseBdev1_malloc 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 true 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:20.434 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 [2024-12-15 18:38:20.665653] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:20.435 [2024-12-15 18:38:20.665725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.435 [2024-12-15 18:38:20.665751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:20.435 [2024-12-15 18:38:20.665760] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.435 [2024-12-15 18:38:20.668137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.435 [2024-12-15 18:38:20.668173] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:20.435 BaseBdev1 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:20.435 BaseBdev2_malloc 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 true 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 [2024-12-15 18:38:20.712454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:20.435 [2024-12-15 18:38:20.712514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.435 [2024-12-15 18:38:20.712537] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:20.435 [2024-12-15 18:38:20.712545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.435 [2024-12-15 18:38:20.715013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.435 [2024-12-15 18:38:20.715049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:20.435 BaseBdev2 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:20.435 18:38:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 [2024-12-15 18:38:20.724512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:20.435 [2024-12-15 18:38:20.726831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.435 [2024-12-15 18:38:20.727019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:20.435 [2024-12-15 18:38:20.727032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:20.435 [2024-12-15 18:38:20.727314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:20.435 [2024-12-15 18:38:20.727478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:20.435 [2024-12-15 18:38:20.727491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:20.435 [2024-12-15 18:38:20.727622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.435 "name": "raid_bdev1", 00:07:20.435 "uuid": "3a9b1379-a524-477b-8daf-85829655cbe6", 00:07:20.435 "strip_size_kb": 0, 00:07:20.435 "state": "online", 00:07:20.435 "raid_level": "raid1", 00:07:20.435 "superblock": true, 00:07:20.435 "num_base_bdevs": 2, 00:07:20.435 "num_base_bdevs_discovered": 2, 00:07:20.435 "num_base_bdevs_operational": 2, 00:07:20.435 "base_bdevs_list": [ 00:07:20.435 { 00:07:20.435 "name": "BaseBdev1", 00:07:20.435 "uuid": "20c7f100-fcb2-57cb-883b-0f9ccb2d0be9", 00:07:20.435 "is_configured": true, 00:07:20.435 "data_offset": 2048, 00:07:20.435 "data_size": 63488 00:07:20.435 }, 00:07:20.435 { 00:07:20.435 "name": "BaseBdev2", 00:07:20.435 "uuid": "b6cff3c7-d93c-5205-824d-df8055ff8516", 00:07:20.435 "is_configured": true, 00:07:20.435 "data_offset": 2048, 00:07:20.435 "data_size": 63488 00:07:20.435 } 00:07:20.435 ] 00:07:20.435 }' 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.435 18:38:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.695 18:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:20.695 18:38:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:20.955 [2024-12-15 18:38:21.220149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.895 18:38:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.895 "name": "raid_bdev1", 00:07:21.895 "uuid": "3a9b1379-a524-477b-8daf-85829655cbe6", 00:07:21.895 "strip_size_kb": 0, 00:07:21.895 "state": "online", 00:07:21.895 "raid_level": "raid1", 00:07:21.895 "superblock": true, 00:07:21.895 "num_base_bdevs": 2, 00:07:21.895 "num_base_bdevs_discovered": 2, 00:07:21.895 "num_base_bdevs_operational": 2, 00:07:21.895 "base_bdevs_list": [ 00:07:21.895 { 00:07:21.895 "name": "BaseBdev1", 00:07:21.895 "uuid": "20c7f100-fcb2-57cb-883b-0f9ccb2d0be9", 00:07:21.895 "is_configured": true, 00:07:21.895 "data_offset": 2048, 00:07:21.895 "data_size": 63488 00:07:21.895 }, 00:07:21.895 { 00:07:21.895 "name": "BaseBdev2", 00:07:21.895 "uuid": "b6cff3c7-d93c-5205-824d-df8055ff8516", 00:07:21.895 "is_configured": true, 00:07:21.895 "data_offset": 2048, 00:07:21.895 "data_size": 63488 
00:07:21.895 } 00:07:21.895 ] 00:07:21.895 }' 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.895 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.155 [2024-12-15 18:38:22.569355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.155 [2024-12-15 18:38:22.569426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.155 [2024-12-15 18:38:22.572124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.155 [2024-12-15 18:38:22.572287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.155 [2024-12-15 18:38:22.572393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.155 [2024-12-15 18:38:22.572405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:22.155 { 00:07:22.155 "results": [ 00:07:22.155 { 00:07:22.155 "job": "raid_bdev1", 00:07:22.155 "core_mask": "0x1", 00:07:22.155 "workload": "randrw", 00:07:22.155 "percentage": 50, 00:07:22.155 "status": "finished", 00:07:22.155 "queue_depth": 1, 00:07:22.155 "io_size": 131072, 00:07:22.155 "runtime": 1.349907, 00:07:22.155 "iops": 15892.946699291137, 00:07:22.155 "mibps": 1986.618337411392, 00:07:22.155 "io_failed": 0, 00:07:22.155 "io_timeout": 0, 00:07:22.155 "avg_latency_us": 60.40906320133296, 00:07:22.155 "min_latency_us": 23.14061135371179, 00:07:22.155 "max_latency_us": 1337.907423580786 00:07:22.155 } 00:07:22.155 ], 
00:07:22.155 "core_count": 1 00:07:22.155 } 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76689 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76689 ']' 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76689 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.155 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76689 00:07:22.414 killing process with pid 76689 00:07:22.414 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.414 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.414 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76689' 00:07:22.414 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76689 00:07:22.414 [2024-12-15 18:38:22.618198] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.414 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76689 00:07:22.414 [2024-12-15 18:38:22.647522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wNgMkBd3r7 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:22.674 ************************************ 00:07:22.674 END 
TEST raid_read_error_test 00:07:22.674 ************************************ 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:22.674 00:07:22.674 real 0m3.283s 00:07:22.674 user 0m4.049s 00:07:22.674 sys 0m0.576s 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.674 18:38:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.674 18:38:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:22.674 18:38:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.674 18:38:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.674 18:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.674 ************************************ 00:07:22.674 START TEST raid_write_error_test 00:07:22.674 ************************************ 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3V1E00i7Jq 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76824 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76824 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76824 ']' 00:07:22.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.674 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.934 [2024-12-15 18:38:23.152940] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:22.934 [2024-12-15 18:38:23.153068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76824 ] 00:07:22.934 [2024-12-15 18:38:23.321579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.934 [2024-12-15 18:38:23.363661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.194 [2024-12-15 18:38:23.442329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.194 [2024-12-15 18:38:23.442484] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 BaseBdev1_malloc 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 true 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 [2024-12-15 18:38:24.020174] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:23.764 [2024-12-15 18:38:24.020246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.764 [2024-12-15 18:38:24.020281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:23.764 [2024-12-15 18:38:24.020290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.764 [2024-12-15 18:38:24.022646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.764 [2024-12-15 18:38:24.022679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:23.764 BaseBdev1 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 BaseBdev2_malloc 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:23.764 18:38:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 true 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.764 [2024-12-15 18:38:24.066741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:23.764 [2024-12-15 18:38:24.066789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.764 [2024-12-15 18:38:24.066823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:23.764 [2024-12-15 18:38:24.066832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.764 [2024-12-15 18:38:24.069183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.764 [2024-12-15 18:38:24.069217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:23.764 BaseBdev2 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:23.764 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.765 [2024-12-15 18:38:24.078785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:23.765 [2024-12-15 18:38:24.080958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:23.765 [2024-12-15 18:38:24.081131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:23.765 [2024-12-15 18:38:24.081145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:23.765 [2024-12-15 18:38:24.081405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:23.765 [2024-12-15 18:38:24.081552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:23.765 [2024-12-15 18:38:24.081564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:23.765 [2024-12-15 18:38:24.081694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.765 "name": "raid_bdev1", 00:07:23.765 "uuid": "783215f5-1bdb-4299-9c56-ca772b2808a8", 00:07:23.765 "strip_size_kb": 0, 00:07:23.765 "state": "online", 00:07:23.765 "raid_level": "raid1", 00:07:23.765 "superblock": true, 00:07:23.765 "num_base_bdevs": 2, 00:07:23.765 "num_base_bdevs_discovered": 2, 00:07:23.765 "num_base_bdevs_operational": 2, 00:07:23.765 "base_bdevs_list": [ 00:07:23.765 { 00:07:23.765 "name": "BaseBdev1", 00:07:23.765 "uuid": "70d00016-bff2-568b-8c69-2fdec98e08f4", 00:07:23.765 "is_configured": true, 00:07:23.765 "data_offset": 2048, 00:07:23.765 "data_size": 63488 00:07:23.765 }, 00:07:23.765 { 00:07:23.765 "name": "BaseBdev2", 00:07:23.765 "uuid": "1074d36e-7d34-52fb-b141-70086ffcc6ac", 00:07:23.765 "is_configured": true, 00:07:23.765 "data_offset": 2048, 00:07:23.765 "data_size": 63488 00:07:23.765 } 00:07:23.765 ] 00:07:23.765 }' 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.765 18:38:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.334 18:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.334 18:38:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.334 [2024-12-15 18:38:24.634301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:25.273 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:25.273 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.273 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.273 [2024-12-15 18:38:25.548832] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:25.273 [2024-12-15 18:38:25.548904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.273 [2024-12-15 18:38:25.549135] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:07:25.273 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.274 "name": "raid_bdev1", 00:07:25.274 "uuid": "783215f5-1bdb-4299-9c56-ca772b2808a8", 00:07:25.274 "strip_size_kb": 0, 00:07:25.274 "state": "online", 00:07:25.274 "raid_level": "raid1", 00:07:25.274 "superblock": true, 00:07:25.274 "num_base_bdevs": 2, 00:07:25.274 "num_base_bdevs_discovered": 1, 00:07:25.274 "num_base_bdevs_operational": 1, 00:07:25.274 "base_bdevs_list": [ 00:07:25.274 { 00:07:25.274 "name": null, 00:07:25.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.274 "is_configured": false, 00:07:25.274 "data_offset": 0, 00:07:25.274 "data_size": 63488 00:07:25.274 }, 00:07:25.274 { 00:07:25.274 "name": 
"BaseBdev2", 00:07:25.274 "uuid": "1074d36e-7d34-52fb-b141-70086ffcc6ac", 00:07:25.274 "is_configured": true, 00:07:25.274 "data_offset": 2048, 00:07:25.274 "data_size": 63488 00:07:25.274 } 00:07:25.274 ] 00:07:25.274 }' 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.274 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.844 [2024-12-15 18:38:25.977790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.844 [2024-12-15 18:38:25.977946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.844 [2024-12-15 18:38:25.980599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.844 [2024-12-15 18:38:25.980701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.844 [2024-12-15 18:38:25.980780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.844 [2024-12-15 18:38:25.980852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:25.844 { 00:07:25.844 "results": [ 00:07:25.844 { 00:07:25.844 "job": "raid_bdev1", 00:07:25.844 "core_mask": "0x1", 00:07:25.844 "workload": "randrw", 00:07:25.844 "percentage": 50, 00:07:25.844 "status": "finished", 00:07:25.844 "queue_depth": 1, 00:07:25.844 "io_size": 131072, 00:07:25.844 "runtime": 1.344125, 00:07:25.844 "iops": 19805.449641960382, 00:07:25.844 "mibps": 2475.6812052450477, 00:07:25.844 "io_failed": 0, 00:07:25.844 "io_timeout": 0, 
00:07:25.844 "avg_latency_us": 47.97800442865394, 00:07:25.844 "min_latency_us": 21.351965065502185, 00:07:25.844 "max_latency_us": 1287.825327510917 00:07:25.844 } 00:07:25.844 ], 00:07:25.844 "core_count": 1 00:07:25.844 } 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76824 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76824 ']' 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76824 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.844 18:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76824 00:07:25.844 killing process with pid 76824 00:07:25.844 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.844 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.844 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76824' 00:07:25.844 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76824 00:07:25.844 [2024-12-15 18:38:26.029980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.844 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76824 00:07:25.844 [2024-12-15 18:38:26.057082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3V1E00i7Jq 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:26.107 00:07:26.107 real 0m3.343s 00:07:26.107 user 0m4.140s 00:07:26.107 sys 0m0.584s 00:07:26.107 ************************************ 00:07:26.107 END TEST raid_write_error_test 00:07:26.107 ************************************ 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.107 18:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.107 18:38:26 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:26.107 18:38:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:26.107 18:38:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:26.107 18:38:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:26.107 18:38:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.107 18:38:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.107 ************************************ 00:07:26.107 START TEST raid_state_function_test 00:07:26.107 ************************************ 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.107 
18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76956 00:07:26.107 Process raid pid: 76956 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76956' 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76956 00:07:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76956 ']' 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.107 18:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.379 [2024-12-15 18:38:26.566796] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:26.379 [2024-12-15 18:38:26.567016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.379 [2024-12-15 18:38:26.741789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.379 [2024-12-15 18:38:26.779461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.650 [2024-12-15 18:38:26.854906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.650 [2024-12-15 18:38:26.854954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.218 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.218 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.218 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:27.218 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.218 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.219 [2024-12-15 18:38:27.405030] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.219 [2024-12-15 18:38:27.405107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.219 [2024-12-15 18:38:27.405118] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.219 [2024-12-15 18:38:27.405130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.219 [2024-12-15 18:38:27.405136] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.219 [2024-12-15 18:38:27.405150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.219 "name": "Existed_Raid", 00:07:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.219 "strip_size_kb": 64, 00:07:27.219 "state": "configuring", 00:07:27.219 "raid_level": "raid0", 00:07:27.219 "superblock": false, 00:07:27.219 "num_base_bdevs": 3, 00:07:27.219 "num_base_bdevs_discovered": 0, 00:07:27.219 "num_base_bdevs_operational": 3, 00:07:27.219 "base_bdevs_list": [ 00:07:27.219 { 00:07:27.219 "name": "BaseBdev1", 00:07:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.219 "is_configured": false, 00:07:27.219 "data_offset": 0, 00:07:27.219 "data_size": 0 00:07:27.219 }, 00:07:27.219 { 00:07:27.219 "name": "BaseBdev2", 00:07:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.219 "is_configured": false, 00:07:27.219 "data_offset": 0, 00:07:27.219 "data_size": 0 00:07:27.219 }, 00:07:27.219 { 00:07:27.219 "name": "BaseBdev3", 00:07:27.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.219 "is_configured": false, 00:07:27.219 "data_offset": 0, 00:07:27.219 "data_size": 0 00:07:27.219 } 00:07:27.219 ] 00:07:27.219 }' 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.219 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.479 18:38:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.479 [2024-12-15 18:38:27.868170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.479 [2024-12-15 18:38:27.868327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.479 [2024-12-15 18:38:27.880136] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.479 [2024-12-15 18:38:27.880221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.479 [2024-12-15 18:38:27.880257] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.479 [2024-12-15 18:38:27.880280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.479 [2024-12-15 18:38:27.880297] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.479 [2024-12-15 18:38:27.880317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.479 [2024-12-15 18:38:27.907414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.479 BaseBdev1 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.479 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 [ 00:07:27.740 { 00:07:27.740 "name": "BaseBdev1", 00:07:27.740 "aliases": [ 00:07:27.740 "33d19959-08e2-47df-bdf0-1c203741338c" 00:07:27.740 ], 00:07:27.740 
"product_name": "Malloc disk", 00:07:27.740 "block_size": 512, 00:07:27.740 "num_blocks": 65536, 00:07:27.740 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c", 00:07:27.740 "assigned_rate_limits": { 00:07:27.740 "rw_ios_per_sec": 0, 00:07:27.740 "rw_mbytes_per_sec": 0, 00:07:27.740 "r_mbytes_per_sec": 0, 00:07:27.740 "w_mbytes_per_sec": 0 00:07:27.740 }, 00:07:27.740 "claimed": true, 00:07:27.740 "claim_type": "exclusive_write", 00:07:27.740 "zoned": false, 00:07:27.740 "supported_io_types": { 00:07:27.740 "read": true, 00:07:27.740 "write": true, 00:07:27.740 "unmap": true, 00:07:27.740 "flush": true, 00:07:27.740 "reset": true, 00:07:27.740 "nvme_admin": false, 00:07:27.740 "nvme_io": false, 00:07:27.740 "nvme_io_md": false, 00:07:27.740 "write_zeroes": true, 00:07:27.740 "zcopy": true, 00:07:27.740 "get_zone_info": false, 00:07:27.740 "zone_management": false, 00:07:27.740 "zone_append": false, 00:07:27.740 "compare": false, 00:07:27.740 "compare_and_write": false, 00:07:27.740 "abort": true, 00:07:27.740 "seek_hole": false, 00:07:27.740 "seek_data": false, 00:07:27.740 "copy": true, 00:07:27.740 "nvme_iov_md": false 00:07:27.740 }, 00:07:27.740 "memory_domains": [ 00:07:27.740 { 00:07:27.740 "dma_device_id": "system", 00:07:27.740 "dma_device_type": 1 00:07:27.740 }, 00:07:27.740 { 00:07:27.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.740 "dma_device_type": 2 00:07:27.740 } 00:07:27.740 ], 00:07:27.740 "driver_specific": {} 00:07:27.740 } 00:07:27.740 ] 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.740 18:38:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.740 "name": "Existed_Raid", 00:07:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.740 "strip_size_kb": 64, 00:07:27.740 "state": "configuring", 00:07:27.740 "raid_level": "raid0", 00:07:27.740 "superblock": false, 00:07:27.740 "num_base_bdevs": 3, 00:07:27.740 "num_base_bdevs_discovered": 1, 00:07:27.740 "num_base_bdevs_operational": 3, 00:07:27.740 "base_bdevs_list": [ 00:07:27.740 { 00:07:27.740 "name": "BaseBdev1", 
00:07:27.740 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c", 00:07:27.740 "is_configured": true, 00:07:27.740 "data_offset": 0, 00:07:27.740 "data_size": 65536 00:07:27.740 }, 00:07:27.740 { 00:07:27.740 "name": "BaseBdev2", 00:07:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.740 "is_configured": false, 00:07:27.740 "data_offset": 0, 00:07:27.740 "data_size": 0 00:07:27.740 }, 00:07:27.740 { 00:07:27.740 "name": "BaseBdev3", 00:07:27.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.740 "is_configured": false, 00:07:27.740 "data_offset": 0, 00:07:27.740 "data_size": 0 00:07:27.740 } 00:07:27.740 ] 00:07:27.740 }' 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.740 18:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.000 [2024-12-15 18:38:28.366728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.000 [2024-12-15 18:38:28.366891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.000 [2024-12-15 
18:38:28.378719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.000 [2024-12-15 18:38:28.380969] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.000 [2024-12-15 18:38:28.381052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.000 [2024-12-15 18:38:28.381083] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.000 [2024-12-15 18:38:28.381107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.000 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.260 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.260 "name": "Existed_Raid", 00:07:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.260 "strip_size_kb": 64, 00:07:28.260 "state": "configuring", 00:07:28.260 "raid_level": "raid0", 00:07:28.260 "superblock": false, 00:07:28.260 "num_base_bdevs": 3, 00:07:28.260 "num_base_bdevs_discovered": 1, 00:07:28.260 "num_base_bdevs_operational": 3, 00:07:28.260 "base_bdevs_list": [ 00:07:28.260 { 00:07:28.260 "name": "BaseBdev1", 00:07:28.260 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c", 00:07:28.260 "is_configured": true, 00:07:28.260 "data_offset": 0, 00:07:28.260 "data_size": 65536 00:07:28.260 }, 00:07:28.260 { 00:07:28.260 "name": "BaseBdev2", 00:07:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.260 "is_configured": false, 00:07:28.260 "data_offset": 0, 00:07:28.260 "data_size": 0 00:07:28.260 }, 00:07:28.260 { 00:07:28.260 "name": "BaseBdev3", 00:07:28.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.260 "is_configured": false, 00:07:28.260 "data_offset": 0, 00:07:28.260 "data_size": 0 00:07:28.260 } 00:07:28.260 ] 00:07:28.260 }' 00:07:28.260 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:28.260 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.520 [2024-12-15 18:38:28.806751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.520 BaseBdev2 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.520 18:38:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.520 [ 00:07:28.520 { 00:07:28.520 "name": "BaseBdev2", 00:07:28.520 "aliases": [ 00:07:28.520 "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad" 00:07:28.520 ], 00:07:28.520 "product_name": "Malloc disk", 00:07:28.520 "block_size": 512, 00:07:28.520 "num_blocks": 65536, 00:07:28.520 "uuid": "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad", 00:07:28.520 "assigned_rate_limits": { 00:07:28.520 "rw_ios_per_sec": 0, 00:07:28.520 "rw_mbytes_per_sec": 0, 00:07:28.520 "r_mbytes_per_sec": 0, 00:07:28.520 "w_mbytes_per_sec": 0 00:07:28.520 }, 00:07:28.520 "claimed": true, 00:07:28.520 "claim_type": "exclusive_write", 00:07:28.520 "zoned": false, 00:07:28.520 "supported_io_types": { 00:07:28.520 "read": true, 00:07:28.520 "write": true, 00:07:28.520 "unmap": true, 00:07:28.520 "flush": true, 00:07:28.520 "reset": true, 00:07:28.520 "nvme_admin": false, 00:07:28.520 "nvme_io": false, 00:07:28.520 "nvme_io_md": false, 00:07:28.520 "write_zeroes": true, 00:07:28.520 "zcopy": true, 00:07:28.520 "get_zone_info": false, 00:07:28.520 "zone_management": false, 00:07:28.520 "zone_append": false, 00:07:28.520 "compare": false, 00:07:28.520 "compare_and_write": false, 00:07:28.520 "abort": true, 00:07:28.520 "seek_hole": false, 00:07:28.520 "seek_data": false, 00:07:28.520 "copy": true, 00:07:28.520 "nvme_iov_md": false 00:07:28.520 }, 00:07:28.520 "memory_domains": [ 00:07:28.520 { 00:07:28.520 "dma_device_id": "system", 00:07:28.520 "dma_device_type": 1 00:07:28.520 }, 00:07:28.520 { 00:07:28.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.520 "dma_device_type": 2 00:07:28.520 } 00:07:28.520 ], 00:07:28.520 "driver_specific": {} 00:07:28.520 } 00:07:28.520 ] 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.520 18:38:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.520 "name": "Existed_Raid", 00:07:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.520 "strip_size_kb": 64, 00:07:28.520 "state": "configuring", 00:07:28.520 "raid_level": "raid0", 00:07:28.520 "superblock": false, 00:07:28.520 "num_base_bdevs": 3, 00:07:28.520 "num_base_bdevs_discovered": 2, 00:07:28.520 "num_base_bdevs_operational": 3, 00:07:28.520 "base_bdevs_list": [ 00:07:28.520 { 00:07:28.520 "name": "BaseBdev1", 00:07:28.520 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c", 00:07:28.520 "is_configured": true, 00:07:28.520 "data_offset": 0, 00:07:28.520 "data_size": 65536 00:07:28.520 }, 00:07:28.520 { 00:07:28.520 "name": "BaseBdev2", 00:07:28.520 "uuid": "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad", 00:07:28.520 "is_configured": true, 00:07:28.520 "data_offset": 0, 00:07:28.520 "data_size": 65536 00:07:28.520 }, 00:07:28.520 { 00:07:28.520 "name": "BaseBdev3", 00:07:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.520 "is_configured": false, 00:07:28.520 "data_offset": 0, 00:07:28.520 "data_size": 0 00:07:28.520 } 00:07:28.520 ] 00:07:28.520 }' 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.520 18:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.091 [2024-12-15 18:38:29.362121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:29.091 [2024-12-15 18:38:29.362264] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.091 [2024-12-15 18:38:29.362321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:29.091 [2024-12-15 18:38:29.362743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:29.091 [2024-12-15 18:38:29.363043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.091 [2024-12-15 18:38:29.363098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.091 [2024-12-15 18:38:29.363429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.091 BaseBdev3 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.091 
18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.091 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.091 [ 00:07:29.091 { 00:07:29.091 "name": "BaseBdev3", 00:07:29.091 "aliases": [ 00:07:29.091 "b1380660-a652-4143-ad5a-4f7f5fe7058a" 00:07:29.091 ], 00:07:29.091 "product_name": "Malloc disk", 00:07:29.091 "block_size": 512, 00:07:29.091 "num_blocks": 65536, 00:07:29.092 "uuid": "b1380660-a652-4143-ad5a-4f7f5fe7058a", 00:07:29.092 "assigned_rate_limits": { 00:07:29.092 "rw_ios_per_sec": 0, 00:07:29.092 "rw_mbytes_per_sec": 0, 00:07:29.092 "r_mbytes_per_sec": 0, 00:07:29.092 "w_mbytes_per_sec": 0 00:07:29.092 }, 00:07:29.092 "claimed": true, 00:07:29.092 "claim_type": "exclusive_write", 00:07:29.092 "zoned": false, 00:07:29.092 "supported_io_types": { 00:07:29.092 "read": true, 00:07:29.092 "write": true, 00:07:29.092 "unmap": true, 00:07:29.092 "flush": true, 00:07:29.092 "reset": true, 00:07:29.092 "nvme_admin": false, 00:07:29.092 "nvme_io": false, 00:07:29.092 "nvme_io_md": false, 00:07:29.092 "write_zeroes": true, 00:07:29.092 "zcopy": true, 00:07:29.092 "get_zone_info": false, 00:07:29.092 "zone_management": false, 00:07:29.092 "zone_append": false, 00:07:29.092 "compare": false, 00:07:29.092 "compare_and_write": false, 00:07:29.092 "abort": true, 00:07:29.092 "seek_hole": false, 00:07:29.092 "seek_data": false, 00:07:29.092 "copy": true, 00:07:29.092 "nvme_iov_md": false 00:07:29.092 }, 00:07:29.092 "memory_domains": [ 00:07:29.092 { 00:07:29.092 "dma_device_id": "system", 00:07:29.092 "dma_device_type": 1 00:07:29.092 }, 00:07:29.092 { 00:07:29.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.092 "dma_device_type": 2 00:07:29.092 } 00:07:29.092 ], 00:07:29.092 "driver_specific": {} 00:07:29.092 } 00:07:29.092 ] 
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:29.092 "name": "Existed_Raid",
00:07:29.092 "uuid": "480144dd-d49f-4a7e-b0ff-2ba117e83245",
00:07:29.092 "strip_size_kb": 64,
00:07:29.092 "state": "online",
00:07:29.092 "raid_level": "raid0",
00:07:29.092 "superblock": false,
00:07:29.092 "num_base_bdevs": 3,
00:07:29.092 "num_base_bdevs_discovered": 3,
00:07:29.092 "num_base_bdevs_operational": 3,
00:07:29.092 "base_bdevs_list": [
00:07:29.092 {
00:07:29.092 "name": "BaseBdev1",
00:07:29.092 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c",
00:07:29.092 "is_configured": true,
00:07:29.092 "data_offset": 0,
00:07:29.092 "data_size": 65536
00:07:29.092 },
00:07:29.092 {
00:07:29.092 "name": "BaseBdev2",
00:07:29.092 "uuid": "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad",
00:07:29.092 "is_configured": true,
00:07:29.092 "data_offset": 0,
00:07:29.092 "data_size": 65536
00:07:29.092 },
00:07:29.092 {
00:07:29.092 "name": "BaseBdev3",
00:07:29.092 "uuid": "b1380660-a652-4143-ad5a-4f7f5fe7058a",
00:07:29.092 "is_configured": true,
00:07:29.092 "data_offset": 0,
00:07:29.092 "data_size": 65536
00:07:29.092 }
00:07:29.092 ]
00:07:29.092 }'
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:29.092 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.661 [2024-12-15 18:38:29.841724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.661 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:29.661 "name": "Existed_Raid",
00:07:29.661 "aliases": [
00:07:29.661 "480144dd-d49f-4a7e-b0ff-2ba117e83245"
00:07:29.661 ],
00:07:29.661 "product_name": "Raid Volume",
00:07:29.661 "block_size": 512,
00:07:29.661 "num_blocks": 196608,
00:07:29.661 "uuid": "480144dd-d49f-4a7e-b0ff-2ba117e83245",
00:07:29.661 "assigned_rate_limits": {
00:07:29.661 "rw_ios_per_sec": 0,
00:07:29.661 "rw_mbytes_per_sec": 0,
00:07:29.661 "r_mbytes_per_sec": 0,
00:07:29.661 "w_mbytes_per_sec": 0
00:07:29.661 },
00:07:29.661 "claimed": false,
00:07:29.661 "zoned": false,
00:07:29.661 "supported_io_types": {
00:07:29.661 "read": true,
00:07:29.661 "write": true,
00:07:29.661 "unmap": true,
00:07:29.661 "flush": true,
00:07:29.661 "reset": true,
00:07:29.661 "nvme_admin": false,
00:07:29.661 "nvme_io": false,
00:07:29.661 "nvme_io_md": false,
00:07:29.661 "write_zeroes": true,
00:07:29.661 "zcopy": false,
00:07:29.661 "get_zone_info": false,
00:07:29.661 "zone_management": false,
00:07:29.661 "zone_append": false,
00:07:29.661 "compare": false,
00:07:29.661 "compare_and_write": false,
00:07:29.661 "abort": false,
00:07:29.661 "seek_hole": false,
00:07:29.661 "seek_data": false,
00:07:29.661 "copy": false,
00:07:29.661 "nvme_iov_md": false
00:07:29.661 },
00:07:29.661 "memory_domains": [
00:07:29.661 {
00:07:29.661 "dma_device_id": "system",
00:07:29.661 "dma_device_type": 1
00:07:29.661 },
00:07:29.661 {
00:07:29.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:29.661 "dma_device_type": 2
00:07:29.661 },
00:07:29.661 {
00:07:29.661 "dma_device_id": "system",
00:07:29.661 "dma_device_type": 1
00:07:29.661 },
00:07:29.661 {
00:07:29.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:29.661 "dma_device_type": 2
00:07:29.661 },
00:07:29.661 {
00:07:29.661 "dma_device_id": "system",
00:07:29.661 "dma_device_type": 1
00:07:29.661 },
00:07:29.661 {
00:07:29.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:29.661 "dma_device_type": 2
00:07:29.661 }
00:07:29.661 ],
00:07:29.661 "driver_specific": {
00:07:29.662 "raid": {
00:07:29.662 "uuid": "480144dd-d49f-4a7e-b0ff-2ba117e83245",
00:07:29.662 "strip_size_kb": 64,
00:07:29.662 "state": "online",
00:07:29.662 "raid_level": "raid0",
00:07:29.662 "superblock": false,
00:07:29.662 "num_base_bdevs": 3,
00:07:29.662 "num_base_bdevs_discovered": 3,
00:07:29.662 "num_base_bdevs_operational": 3,
00:07:29.662 "base_bdevs_list": [
00:07:29.662 {
00:07:29.662 "name": "BaseBdev1",
00:07:29.662 "uuid": "33d19959-08e2-47df-bdf0-1c203741338c",
00:07:29.662 "is_configured": true,
00:07:29.662 "data_offset": 0,
00:07:29.662 "data_size": 65536
00:07:29.662 },
00:07:29.662 {
00:07:29.662 "name": "BaseBdev2",
00:07:29.662 "uuid": "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad",
00:07:29.662 "is_configured": true,
00:07:29.662 "data_offset": 0,
00:07:29.662 "data_size": 65536
00:07:29.662 },
00:07:29.662 {
00:07:29.662 "name": "BaseBdev3",
00:07:29.662 "uuid": "b1380660-a652-4143-ad5a-4f7f5fe7058a",
00:07:29.662 "is_configured": true,
00:07:29.662 "data_offset": 0,
00:07:29.662 "data_size": 65536
00:07:29.662 }
00:07:29.662 ]
00:07:29.662 }
00:07:29.662 }
00:07:29.662 }'
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:29.662 BaseBdev2
00:07:29.662 BaseBdev3'
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.662 18:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.662 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.662 [2024-12-15 18:38:30.096961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-15 18:38:30.097030] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-15 18:38:30.097120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.921 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:29.921 "name": "Existed_Raid",
00:07:29.921 "uuid": "480144dd-d49f-4a7e-b0ff-2ba117e83245",
00:07:29.921 "strip_size_kb": 64,
00:07:29.921 "state": "offline",
00:07:29.921 "raid_level": "raid0",
00:07:29.921 "superblock": false,
00:07:29.921 "num_base_bdevs": 3,
00:07:29.921 "num_base_bdevs_discovered": 2,
00:07:29.921 "num_base_bdevs_operational": 2,
00:07:29.921 "base_bdevs_list": [
00:07:29.921 {
00:07:29.921 "name": null,
00:07:29.921 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:29.921 "is_configured": false,
00:07:29.921 "data_offset": 0,
00:07:29.921 "data_size": 65536
00:07:29.921 },
00:07:29.922 {
00:07:29.922 "name": "BaseBdev2",
00:07:29.922 "uuid": "36f71bbf-6cae-4ada-8d5d-5dce66bfd3ad",
00:07:29.922 "is_configured": true,
00:07:29.922 "data_offset": 0,
00:07:29.922 "data_size": 65536
00:07:29.922 },
00:07:29.922 {
00:07:29.922 "name": "BaseBdev3",
00:07:29.922 "uuid": "b1380660-a652-4143-ad5a-4f7f5fe7058a",
00:07:29.922 "is_configured": true,
00:07:29.922 "data_offset": 0,
00:07:29.922 "data_size": 65536
00:07:29.922 }
00:07:29.922 ]
00:07:29.922 }'
00:07:29.922 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:29.922 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.181 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.182 [2024-12-15 18:38:30.588608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.182 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 [2024-12-15 18:38:30.668480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-15 18:38:30.668597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 BaseBdev2
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.442 [
00:07:30.442 {
00:07:30.442 "name": "BaseBdev2",
00:07:30.442 "aliases": [
00:07:30.442 "66d9f892-1ccb-42bf-b345-9e3566e522af"
00:07:30.442 ],
00:07:30.442 "product_name": "Malloc disk",
00:07:30.442 "block_size": 512,
00:07:30.442 "num_blocks": 65536,
00:07:30.442 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af",
00:07:30.442 "assigned_rate_limits": {
00:07:30.442 "rw_ios_per_sec": 0,
00:07:30.442 "rw_mbytes_per_sec": 0,
00:07:30.442 "r_mbytes_per_sec": 0,
00:07:30.442 "w_mbytes_per_sec": 0
00:07:30.442 },
00:07:30.442 "claimed": false,
00:07:30.442 "zoned": false,
00:07:30.442 "supported_io_types": {
00:07:30.442 "read": true,
00:07:30.442 "write": true,
00:07:30.442 "unmap": true,
00:07:30.442 "flush": true,
00:07:30.442 "reset": true,
00:07:30.442 "nvme_admin": false,
00:07:30.442 "nvme_io": false,
00:07:30.442 "nvme_io_md": false,
00:07:30.442 "write_zeroes": true,
00:07:30.442 "zcopy": true,
00:07:30.442 "get_zone_info": false,
00:07:30.442 "zone_management": false,
00:07:30.442 "zone_append": false,
00:07:30.442 "compare": false,
00:07:30.442 "compare_and_write": false,
00:07:30.442 "abort": true,
00:07:30.442 "seek_hole": false,
00:07:30.442 "seek_data": false,
00:07:30.442 "copy": true,
00:07:30.442 "nvme_iov_md": false
00:07:30.442 },
00:07:30.442 "memory_domains": [
00:07:30.442 {
00:07:30.442 "dma_device_id": "system",
00:07:30.442 "dma_device_type": 1
00:07:30.442 },
00:07:30.442 {
00:07:30.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:30.442 "dma_device_type": 2
00:07:30.442 }
00:07:30.442 ],
00:07:30.442 "driver_specific": {}
00:07:30.442 }
00:07:30.442 ]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.442 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.443 BaseBdev3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.443 [
00:07:30.443 {
00:07:30.443 "name": "BaseBdev3",
00:07:30.443 "aliases": [
00:07:30.443 "2e3a294a-a339-407a-ad92-7a34fbe6b305"
00:07:30.443 ],
00:07:30.443 "product_name": "Malloc disk",
00:07:30.443 "block_size": 512,
00:07:30.443 "num_blocks": 65536,
00:07:30.443 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305",
00:07:30.443 "assigned_rate_limits": {
00:07:30.443 "rw_ios_per_sec": 0,
00:07:30.443 "rw_mbytes_per_sec": 0,
00:07:30.443 "r_mbytes_per_sec": 0,
00:07:30.443 "w_mbytes_per_sec": 0
00:07:30.443 },
00:07:30.443 "claimed": false,
00:07:30.443 "zoned": false,
00:07:30.443 "supported_io_types": {
00:07:30.443 "read": true,
00:07:30.443 "write": true,
00:07:30.443 "unmap": true,
00:07:30.443 "flush": true,
00:07:30.443 "reset": true,
00:07:30.443 "nvme_admin": false,
00:07:30.443 "nvme_io": false,
00:07:30.443 "nvme_io_md": false,
00:07:30.443 "write_zeroes": true,
00:07:30.443 "zcopy": true,
00:07:30.443 "get_zone_info": false,
00:07:30.443 "zone_management": false,
00:07:30.443 "zone_append": false,
00:07:30.443 "compare": false,
00:07:30.443 "compare_and_write": false,
00:07:30.443 "abort": true,
00:07:30.443 "seek_hole": false,
00:07:30.443 "seek_data": false,
00:07:30.443 "copy": true,
00:07:30.443 "nvme_iov_md": false
00:07:30.443 },
00:07:30.443 "memory_domains": [
00:07:30.443 {
00:07:30.443 "dma_device_id": "system",
00:07:30.443 "dma_device_type": 1
00:07:30.443 },
00:07:30.443 {
00:07:30.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:30.443 "dma_device_type": 2
00:07:30.443 }
00:07:30.443 ],
00:07:30.443 "driver_specific": {}
00:07:30.443 }
00:07:30.443 ]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.443 [2024-12-15 18:38:30.868394] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-15 18:38:30.868538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-15 18:38:30.868583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-15 18:38:30.870713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:30.443 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:30.703 "name": "Existed_Raid",
00:07:30.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.703 "strip_size_kb": 64,
00:07:30.703 "state": "configuring",
00:07:30.703 "raid_level": "raid0",
00:07:30.703 "superblock": false,
00:07:30.703 "num_base_bdevs": 3,
00:07:30.703 "num_base_bdevs_discovered": 2,
00:07:30.703 "num_base_bdevs_operational": 3,
00:07:30.703 "base_bdevs_list": [
00:07:30.703 {
00:07:30.703 "name": "BaseBdev1",
00:07:30.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.703 "is_configured": false,
00:07:30.703 "data_offset": 0,
00:07:30.703 "data_size": 0
00:07:30.703 },
00:07:30.703 {
00:07:30.703 "name": "BaseBdev2",
00:07:30.703 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af",
00:07:30.703 "is_configured": true,
00:07:30.703 "data_offset": 0,
00:07:30.703 "data_size": 65536
00:07:30.703 },
00:07:30.703 {
00:07:30.703 "name": "BaseBdev3",
00:07:30.703 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305",
00:07:30.703 "is_configured": true,
00:07:30.703 "data_offset": 0,
00:07:30.703 "data_size": 65536
00:07:30.703 }
00:07:30.703 ]
00:07:30.703 }'
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:30.703 18:38:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.962 [2024-12-15 18:38:31.319710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:30.962 "name": "Existed_Raid",
00:07:30.962 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.962 "strip_size_kb": 64,
00:07:30.962 "state": "configuring",
00:07:30.962 "raid_level": "raid0",
00:07:30.962 "superblock": false,
00:07:30.962 "num_base_bdevs": 3,
00:07:30.962 "num_base_bdevs_discovered": 1,
00:07:30.962 "num_base_bdevs_operational": 3,
00:07:30.962 "base_bdevs_list": [
00:07:30.962 {
00:07:30.962 "name": "BaseBdev1",
00:07:30.962 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:30.962 "is_configured": false,
00:07:30.962 "data_offset": 0,
00:07:30.962 "data_size": 0
00:07:30.962 },
00:07:30.962 {
00:07:30.962 "name": null,
00:07:30.962 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af",
00:07:30.962 "is_configured": false,
00:07:30.962 "data_offset": 0,
00:07:30.962 "data_size": 65536 00:07:30.962 }, 00:07:30.962 { 00:07:30.962 "name": "BaseBdev3", 00:07:30.962 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:30.962 "is_configured": true, 00:07:30.962 "data_offset": 0, 00:07:30.962 "data_size": 65536 00:07:30.962 } 00:07:30.962 ] 00:07:30.962 }' 00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.962 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 [2024-12-15 18:38:31.811606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.531 BaseBdev1 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 [ 00:07:31.531 { 00:07:31.531 "name": "BaseBdev1", 00:07:31.531 "aliases": [ 00:07:31.531 "f0b30d1b-0651-4bfb-a89b-80d37e0eb661" 00:07:31.531 ], 00:07:31.531 "product_name": "Malloc disk", 00:07:31.531 "block_size": 512, 00:07:31.531 "num_blocks": 65536, 00:07:31.531 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:31.531 "assigned_rate_limits": { 00:07:31.531 "rw_ios_per_sec": 0, 00:07:31.531 "rw_mbytes_per_sec": 0, 00:07:31.531 "r_mbytes_per_sec": 0, 00:07:31.531 "w_mbytes_per_sec": 0 00:07:31.531 }, 00:07:31.531 "claimed": true, 00:07:31.531 "claim_type": "exclusive_write", 00:07:31.531 "zoned": false, 00:07:31.531 "supported_io_types": { 00:07:31.531 "read": true, 00:07:31.531 "write": true, 00:07:31.531 "unmap": 
true, 00:07:31.531 "flush": true, 00:07:31.531 "reset": true, 00:07:31.531 "nvme_admin": false, 00:07:31.531 "nvme_io": false, 00:07:31.531 "nvme_io_md": false, 00:07:31.531 "write_zeroes": true, 00:07:31.531 "zcopy": true, 00:07:31.531 "get_zone_info": false, 00:07:31.531 "zone_management": false, 00:07:31.531 "zone_append": false, 00:07:31.531 "compare": false, 00:07:31.531 "compare_and_write": false, 00:07:31.531 "abort": true, 00:07:31.531 "seek_hole": false, 00:07:31.531 "seek_data": false, 00:07:31.531 "copy": true, 00:07:31.531 "nvme_iov_md": false 00:07:31.531 }, 00:07:31.531 "memory_domains": [ 00:07:31.531 { 00:07:31.531 "dma_device_id": "system", 00:07:31.531 "dma_device_type": 1 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.531 "dma_device_type": 2 00:07:31.531 } 00:07:31.531 ], 00:07:31.531 "driver_specific": {} 00:07:31.531 } 00:07:31.531 ] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.531 18:38:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.531 "name": "Existed_Raid", 00:07:31.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.531 "strip_size_kb": 64, 00:07:31.531 "state": "configuring", 00:07:31.531 "raid_level": "raid0", 00:07:31.531 "superblock": false, 00:07:31.531 "num_base_bdevs": 3, 00:07:31.531 "num_base_bdevs_discovered": 2, 00:07:31.531 "num_base_bdevs_operational": 3, 00:07:31.531 "base_bdevs_list": [ 00:07:31.531 { 00:07:31.531 "name": "BaseBdev1", 00:07:31.531 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:31.531 "is_configured": true, 00:07:31.531 "data_offset": 0, 00:07:31.531 "data_size": 65536 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "name": null, 00:07:31.531 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:31.531 "is_configured": false, 00:07:31.531 "data_offset": 0, 00:07:31.531 "data_size": 65536 00:07:31.531 }, 00:07:31.531 { 00:07:31.531 "name": "BaseBdev3", 00:07:31.531 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:31.531 "is_configured": true, 00:07:31.531 "data_offset": 0, 
00:07:31.531 "data_size": 65536 00:07:31.531 } 00:07:31.531 ] 00:07:31.531 }' 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.531 18:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.100 [2024-12-15 18:38:32.314852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.100 "name": "Existed_Raid", 00:07:32.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.100 "strip_size_kb": 64, 00:07:32.100 "state": "configuring", 00:07:32.100 "raid_level": "raid0", 00:07:32.100 "superblock": false, 00:07:32.100 "num_base_bdevs": 3, 00:07:32.100 "num_base_bdevs_discovered": 1, 00:07:32.100 "num_base_bdevs_operational": 3, 00:07:32.100 "base_bdevs_list": [ 00:07:32.100 { 00:07:32.100 "name": "BaseBdev1", 00:07:32.100 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:32.100 "is_configured": true, 00:07:32.100 "data_offset": 0, 00:07:32.100 "data_size": 65536 00:07:32.100 }, 00:07:32.100 { 
00:07:32.100 "name": null, 00:07:32.100 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:32.100 "is_configured": false, 00:07:32.100 "data_offset": 0, 00:07:32.100 "data_size": 65536 00:07:32.100 }, 00:07:32.100 { 00:07:32.100 "name": null, 00:07:32.100 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:32.100 "is_configured": false, 00:07:32.100 "data_offset": 0, 00:07:32.100 "data_size": 65536 00:07:32.100 } 00:07:32.100 ] 00:07:32.100 }' 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.100 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.358 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:32.358 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.358 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.358 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.358 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.617 [2024-12-15 18:38:32.821961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.617 "name": "Existed_Raid", 00:07:32.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.617 "strip_size_kb": 64, 00:07:32.617 "state": "configuring", 00:07:32.617 "raid_level": "raid0", 00:07:32.617 
"superblock": false, 00:07:32.617 "num_base_bdevs": 3, 00:07:32.617 "num_base_bdevs_discovered": 2, 00:07:32.617 "num_base_bdevs_operational": 3, 00:07:32.617 "base_bdevs_list": [ 00:07:32.617 { 00:07:32.617 "name": "BaseBdev1", 00:07:32.617 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:32.617 "is_configured": true, 00:07:32.617 "data_offset": 0, 00:07:32.617 "data_size": 65536 00:07:32.617 }, 00:07:32.617 { 00:07:32.617 "name": null, 00:07:32.617 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:32.617 "is_configured": false, 00:07:32.617 "data_offset": 0, 00:07:32.617 "data_size": 65536 00:07:32.617 }, 00:07:32.617 { 00:07:32.617 "name": "BaseBdev3", 00:07:32.617 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:32.617 "is_configured": true, 00:07:32.617 "data_offset": 0, 00:07:32.617 "data_size": 65536 00:07:32.617 } 00:07:32.617 ] 00:07:32.617 }' 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.617 18:38:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.876 [2024-12-15 18:38:33.277273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.876 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.136 18:38:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.136 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.136 "name": "Existed_Raid", 00:07:33.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.136 "strip_size_kb": 64, 00:07:33.136 "state": "configuring", 00:07:33.136 "raid_level": "raid0", 00:07:33.136 "superblock": false, 00:07:33.136 "num_base_bdevs": 3, 00:07:33.136 "num_base_bdevs_discovered": 1, 00:07:33.136 "num_base_bdevs_operational": 3, 00:07:33.136 "base_bdevs_list": [ 00:07:33.136 { 00:07:33.136 "name": null, 00:07:33.136 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:33.136 "is_configured": false, 00:07:33.136 "data_offset": 0, 00:07:33.136 "data_size": 65536 00:07:33.136 }, 00:07:33.136 { 00:07:33.136 "name": null, 00:07:33.136 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:33.136 "is_configured": false, 00:07:33.136 "data_offset": 0, 00:07:33.136 "data_size": 65536 00:07:33.136 }, 00:07:33.136 { 00:07:33.136 "name": "BaseBdev3", 00:07:33.136 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:33.136 "is_configured": true, 00:07:33.136 "data_offset": 0, 00:07:33.136 "data_size": 65536 00:07:33.136 } 00:07:33.136 ] 00:07:33.136 }' 00:07:33.136 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.136 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 [2024-12-15 18:38:33.760384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.396 "name": "Existed_Raid", 00:07:33.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.396 "strip_size_kb": 64, 00:07:33.396 "state": "configuring", 00:07:33.396 "raid_level": "raid0", 00:07:33.396 "superblock": false, 00:07:33.396 "num_base_bdevs": 3, 00:07:33.396 "num_base_bdevs_discovered": 2, 00:07:33.396 "num_base_bdevs_operational": 3, 00:07:33.396 "base_bdevs_list": [ 00:07:33.396 { 00:07:33.396 "name": null, 00:07:33.396 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:33.396 "is_configured": false, 00:07:33.396 "data_offset": 0, 00:07:33.396 "data_size": 65536 00:07:33.396 }, 00:07:33.396 { 00:07:33.396 "name": "BaseBdev2", 00:07:33.396 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:33.396 "is_configured": true, 00:07:33.396 "data_offset": 0, 00:07:33.396 "data_size": 65536 00:07:33.396 }, 00:07:33.396 { 00:07:33.396 "name": "BaseBdev3", 00:07:33.396 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:33.396 "is_configured": true, 00:07:33.396 "data_offset": 0, 00:07:33.396 "data_size": 65536 00:07:33.396 } 00:07:33.396 ] 00:07:33.396 }' 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.396 18:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:33.966 
18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f0b30d1b-0651-4bfb-a89b-80d37e0eb661 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.966 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.966 [2024-12-15 18:38:34.352686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:33.966 [2024-12-15 18:38:34.352735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:33.966 [2024-12-15 18:38:34.352745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:33.966 [2024-12-15 18:38:34.353048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:07:33.966 [2024-12-15 18:38:34.353186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:33.966 [2024-12-15 18:38:34.353201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:07:33.966 [2024-12-15 18:38:34.353405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.967 NewBaseBdev 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:33.967 [ 00:07:33.967 { 00:07:33.967 "name": "NewBaseBdev", 00:07:33.967 "aliases": [ 00:07:33.967 "f0b30d1b-0651-4bfb-a89b-80d37e0eb661" 00:07:33.967 ], 00:07:33.967 "product_name": "Malloc disk", 00:07:33.967 "block_size": 512, 00:07:33.967 "num_blocks": 65536, 00:07:33.967 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:33.967 "assigned_rate_limits": { 00:07:33.967 "rw_ios_per_sec": 0, 00:07:33.967 "rw_mbytes_per_sec": 0, 00:07:33.967 "r_mbytes_per_sec": 0, 00:07:33.967 "w_mbytes_per_sec": 0 00:07:33.967 }, 00:07:33.967 "claimed": true, 00:07:33.967 "claim_type": "exclusive_write", 00:07:33.967 "zoned": false, 00:07:33.967 "supported_io_types": { 00:07:33.967 "read": true, 00:07:33.967 "write": true, 00:07:33.967 "unmap": true, 00:07:33.967 "flush": true, 00:07:33.967 "reset": true, 00:07:33.967 "nvme_admin": false, 00:07:33.967 "nvme_io": false, 00:07:33.967 "nvme_io_md": false, 00:07:33.967 "write_zeroes": true, 00:07:33.967 "zcopy": true, 00:07:33.967 "get_zone_info": false, 00:07:33.967 "zone_management": false, 00:07:33.967 "zone_append": false, 00:07:33.967 "compare": false, 00:07:33.967 "compare_and_write": false, 00:07:33.967 "abort": true, 00:07:33.967 "seek_hole": false, 00:07:33.967 "seek_data": false, 00:07:33.967 "copy": true, 00:07:33.967 "nvme_iov_md": false 00:07:33.967 }, 00:07:33.967 "memory_domains": [ 00:07:33.967 { 00:07:33.967 "dma_device_id": "system", 00:07:33.967 "dma_device_type": 1 00:07:33.967 }, 00:07:33.967 { 00:07:33.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.967 "dma_device_type": 2 00:07:33.967 } 00:07:33.967 ], 00:07:33.967 "driver_specific": {} 00:07:33.967 } 00:07:33.967 ] 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.967 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.227 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.227 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.227 "name": "Existed_Raid", 00:07:34.227 "uuid": "39a5a13a-6084-4132-83f0-a1dc798aa7f5", 00:07:34.227 "strip_size_kb": 64, 00:07:34.227 "state": "online", 00:07:34.227 "raid_level": "raid0", 00:07:34.227 "superblock": false, 00:07:34.227 "num_base_bdevs": 3, 00:07:34.227 
"num_base_bdevs_discovered": 3, 00:07:34.227 "num_base_bdevs_operational": 3, 00:07:34.227 "base_bdevs_list": [ 00:07:34.227 { 00:07:34.227 "name": "NewBaseBdev", 00:07:34.227 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:34.227 "is_configured": true, 00:07:34.227 "data_offset": 0, 00:07:34.227 "data_size": 65536 00:07:34.227 }, 00:07:34.227 { 00:07:34.227 "name": "BaseBdev2", 00:07:34.227 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:34.227 "is_configured": true, 00:07:34.227 "data_offset": 0, 00:07:34.227 "data_size": 65536 00:07:34.227 }, 00:07:34.227 { 00:07:34.227 "name": "BaseBdev3", 00:07:34.227 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:34.227 "is_configured": true, 00:07:34.227 "data_offset": 0, 00:07:34.227 "data_size": 65536 00:07:34.227 } 00:07:34.227 ] 00:07:34.227 }' 00:07:34.227 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.227 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 [2024-12-15 18:38:34.836387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.487 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.487 "name": "Existed_Raid", 00:07:34.487 "aliases": [ 00:07:34.487 "39a5a13a-6084-4132-83f0-a1dc798aa7f5" 00:07:34.487 ], 00:07:34.487 "product_name": "Raid Volume", 00:07:34.487 "block_size": 512, 00:07:34.487 "num_blocks": 196608, 00:07:34.487 "uuid": "39a5a13a-6084-4132-83f0-a1dc798aa7f5", 00:07:34.487 "assigned_rate_limits": { 00:07:34.487 "rw_ios_per_sec": 0, 00:07:34.487 "rw_mbytes_per_sec": 0, 00:07:34.487 "r_mbytes_per_sec": 0, 00:07:34.487 "w_mbytes_per_sec": 0 00:07:34.487 }, 00:07:34.487 "claimed": false, 00:07:34.487 "zoned": false, 00:07:34.487 "supported_io_types": { 00:07:34.487 "read": true, 00:07:34.487 "write": true, 00:07:34.487 "unmap": true, 00:07:34.487 "flush": true, 00:07:34.487 "reset": true, 00:07:34.487 "nvme_admin": false, 00:07:34.487 "nvme_io": false, 00:07:34.487 "nvme_io_md": false, 00:07:34.487 "write_zeroes": true, 00:07:34.487 "zcopy": false, 00:07:34.487 "get_zone_info": false, 00:07:34.487 "zone_management": false, 00:07:34.487 "zone_append": false, 00:07:34.487 "compare": false, 00:07:34.487 "compare_and_write": false, 00:07:34.487 "abort": false, 00:07:34.487 "seek_hole": false, 00:07:34.487 "seek_data": false, 00:07:34.487 "copy": false, 00:07:34.487 "nvme_iov_md": false 00:07:34.487 }, 00:07:34.487 "memory_domains": [ 00:07:34.487 { 00:07:34.487 "dma_device_id": "system", 00:07:34.487 "dma_device_type": 1 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.487 "dma_device_type": 2 00:07:34.487 }, 
00:07:34.487 { 00:07:34.487 "dma_device_id": "system", 00:07:34.487 "dma_device_type": 1 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.487 "dma_device_type": 2 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "dma_device_id": "system", 00:07:34.487 "dma_device_type": 1 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.487 "dma_device_type": 2 00:07:34.487 } 00:07:34.487 ], 00:07:34.487 "driver_specific": { 00:07:34.487 "raid": { 00:07:34.487 "uuid": "39a5a13a-6084-4132-83f0-a1dc798aa7f5", 00:07:34.487 "strip_size_kb": 64, 00:07:34.487 "state": "online", 00:07:34.487 "raid_level": "raid0", 00:07:34.487 "superblock": false, 00:07:34.487 "num_base_bdevs": 3, 00:07:34.487 "num_base_bdevs_discovered": 3, 00:07:34.487 "num_base_bdevs_operational": 3, 00:07:34.487 "base_bdevs_list": [ 00:07:34.487 { 00:07:34.487 "name": "NewBaseBdev", 00:07:34.487 "uuid": "f0b30d1b-0651-4bfb-a89b-80d37e0eb661", 00:07:34.487 "is_configured": true, 00:07:34.487 "data_offset": 0, 00:07:34.487 "data_size": 65536 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "name": "BaseBdev2", 00:07:34.487 "uuid": "66d9f892-1ccb-42bf-b345-9e3566e522af", 00:07:34.487 "is_configured": true, 00:07:34.487 "data_offset": 0, 00:07:34.487 "data_size": 65536 00:07:34.487 }, 00:07:34.487 { 00:07:34.487 "name": "BaseBdev3", 00:07:34.487 "uuid": "2e3a294a-a339-407a-ad92-7a34fbe6b305", 00:07:34.487 "is_configured": true, 00:07:34.487 "data_offset": 0, 00:07:34.488 "data_size": 65536 00:07:34.488 } 00:07:34.488 ] 00:07:34.488 } 00:07:34.488 } 00:07:34.488 }' 00:07:34.488 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.488 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:34.488 BaseBdev2 00:07:34.488 BaseBdev3' 00:07:34.748 18:38:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.748 18:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.748 [2024-12-15 18:38:35.107609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.748 [2024-12-15 18:38:35.107732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.748 [2024-12-15 18:38:35.107876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.748 [2024-12-15 18:38:35.107976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.748 [2024-12-15 18:38:35.108028] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76956 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76956 ']' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76956 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76956 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76956' 00:07:34.748 killing process with pid 76956 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 76956 00:07:34.748 [2024-12-15 18:38:35.150096] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.748 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76956 00:07:35.008 [2024-12-15 18:38:35.211390] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.267 18:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.267 00:07:35.267 real 0m9.080s 00:07:35.267 user 0m15.219s 00:07:35.267 sys 0m1.928s 00:07:35.267 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.268 ************************************ 00:07:35.268 END TEST raid_state_function_test 00:07:35.268 ************************************ 00:07:35.268 18:38:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:35.268 18:38:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.268 18:38:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.268 18:38:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.268 ************************************ 00:07:35.268 START TEST raid_state_function_test_sb 00:07:35.268 ************************************ 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77561 00:07:35.268 18:38:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77561' 00:07:35.268 Process raid pid: 77561 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77561 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77561 ']' 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.268 18:38:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.527 [2024-12-15 18:38:35.708097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:35.527 [2024-12-15 18:38:35.708414] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.527 [2024-12-15 18:38:35.880137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.527 [2024-12-15 18:38:35.920117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.787 [2024-12-15 18:38:35.996446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.787 [2024-12-15 18:38:35.996565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.380 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 [2024-12-15 18:38:36.570953] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.381 [2024-12-15 18:38:36.571026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.381 [2024-12-15 18:38:36.571037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.381 [2024-12-15 18:38:36.571046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.381 [2024-12-15 18:38:36.571052] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:36.381 [2024-12-15 18:38:36.571065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.381 "name": "Existed_Raid", 00:07:36.381 "uuid": "8f2bdeec-7d36-4e7f-bc0d-0941586d8608", 00:07:36.381 "strip_size_kb": 64, 00:07:36.381 "state": "configuring", 00:07:36.381 "raid_level": "raid0", 00:07:36.381 "superblock": true, 00:07:36.381 "num_base_bdevs": 3, 00:07:36.381 "num_base_bdevs_discovered": 0, 00:07:36.381 "num_base_bdevs_operational": 3, 00:07:36.381 "base_bdevs_list": [ 00:07:36.381 { 00:07:36.381 "name": "BaseBdev1", 00:07:36.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.381 "is_configured": false, 00:07:36.381 "data_offset": 0, 00:07:36.381 "data_size": 0 00:07:36.381 }, 00:07:36.381 { 00:07:36.381 "name": "BaseBdev2", 00:07:36.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.381 "is_configured": false, 00:07:36.381 "data_offset": 0, 00:07:36.381 "data_size": 0 00:07:36.381 }, 00:07:36.381 { 00:07:36.381 "name": "BaseBdev3", 00:07:36.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.381 "is_configured": false, 00:07:36.381 "data_offset": 0, 00:07:36.381 "data_size": 0 00:07:36.381 } 00:07:36.381 ] 00:07:36.381 }' 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.381 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 18:38:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.641 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.641 18:38:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 [2024-12-15 18:38:37.006079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.641 [2024-12-15 18:38:37.006230] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 [2024-12-15 18:38:37.018068] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.641 [2024-12-15 18:38:37.018187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.641 [2024-12-15 18:38:37.018218] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.641 [2024-12-15 18:38:37.018240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.641 [2024-12-15 18:38:37.018257] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.641 [2024-12-15 18:38:37.018277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 [2024-12-15 18:38:37.045378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.641 BaseBdev1 
00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.641 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.641 [ 00:07:36.641 { 00:07:36.641 "name": "BaseBdev1", 00:07:36.641 "aliases": [ 00:07:36.641 "c567b8be-edfd-46f9-a7ee-3099c61032e5" 00:07:36.641 ], 00:07:36.641 "product_name": "Malloc disk", 00:07:36.641 "block_size": 512, 00:07:36.641 "num_blocks": 65536, 00:07:36.641 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:36.641 "assigned_rate_limits": { 00:07:36.641 
"rw_ios_per_sec": 0, 00:07:36.641 "rw_mbytes_per_sec": 0, 00:07:36.641 "r_mbytes_per_sec": 0, 00:07:36.641 "w_mbytes_per_sec": 0 00:07:36.641 }, 00:07:36.641 "claimed": true, 00:07:36.641 "claim_type": "exclusive_write", 00:07:36.641 "zoned": false, 00:07:36.641 "supported_io_types": { 00:07:36.641 "read": true, 00:07:36.641 "write": true, 00:07:36.641 "unmap": true, 00:07:36.641 "flush": true, 00:07:36.641 "reset": true, 00:07:36.641 "nvme_admin": false, 00:07:36.641 "nvme_io": false, 00:07:36.641 "nvme_io_md": false, 00:07:36.901 "write_zeroes": true, 00:07:36.901 "zcopy": true, 00:07:36.901 "get_zone_info": false, 00:07:36.901 "zone_management": false, 00:07:36.901 "zone_append": false, 00:07:36.901 "compare": false, 00:07:36.901 "compare_and_write": false, 00:07:36.901 "abort": true, 00:07:36.901 "seek_hole": false, 00:07:36.901 "seek_data": false, 00:07:36.901 "copy": true, 00:07:36.901 "nvme_iov_md": false 00:07:36.901 }, 00:07:36.901 "memory_domains": [ 00:07:36.901 { 00:07:36.901 "dma_device_id": "system", 00:07:36.901 "dma_device_type": 1 00:07:36.901 }, 00:07:36.901 { 00:07:36.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.901 "dma_device_type": 2 00:07:36.901 } 00:07:36.901 ], 00:07:36.901 "driver_specific": {} 00:07:36.901 } 00:07:36.901 ] 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.901 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.901 "name": "Existed_Raid", 00:07:36.901 "uuid": "1beacb73-7f8f-444b-8bc4-21a6f0151a6f", 00:07:36.901 "strip_size_kb": 64, 00:07:36.901 "state": "configuring", 00:07:36.901 "raid_level": "raid0", 00:07:36.901 "superblock": true, 00:07:36.901 "num_base_bdevs": 3, 00:07:36.901 "num_base_bdevs_discovered": 1, 00:07:36.901 "num_base_bdevs_operational": 3, 00:07:36.901 "base_bdevs_list": [ 00:07:36.901 { 00:07:36.901 "name": "BaseBdev1", 00:07:36.901 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:36.901 "is_configured": true, 00:07:36.901 "data_offset": 2048, 00:07:36.901 "data_size": 63488 
00:07:36.901 }, 00:07:36.901 { 00:07:36.901 "name": "BaseBdev2", 00:07:36.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.902 "is_configured": false, 00:07:36.902 "data_offset": 0, 00:07:36.902 "data_size": 0 00:07:36.902 }, 00:07:36.902 { 00:07:36.902 "name": "BaseBdev3", 00:07:36.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.902 "is_configured": false, 00:07:36.902 "data_offset": 0, 00:07:36.902 "data_size": 0 00:07:36.902 } 00:07:36.902 ] 00:07:36.902 }' 00:07:36.902 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.902 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 [2024-12-15 18:38:37.500707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.162 [2024-12-15 18:38:37.500794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 [2024-12-15 18:38:37.512743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.162 [2024-12-15 
18:38:37.515084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.162 [2024-12-15 18:38:37.515181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.162 [2024-12-15 18:38:37.515212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.162 [2024-12-15 18:38:37.515236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.162 "name": "Existed_Raid", 00:07:37.162 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:37.162 "strip_size_kb": 64, 00:07:37.162 "state": "configuring", 00:07:37.162 "raid_level": "raid0", 00:07:37.162 "superblock": true, 00:07:37.162 "num_base_bdevs": 3, 00:07:37.162 "num_base_bdevs_discovered": 1, 00:07:37.162 "num_base_bdevs_operational": 3, 00:07:37.162 "base_bdevs_list": [ 00:07:37.162 { 00:07:37.162 "name": "BaseBdev1", 00:07:37.162 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:37.162 "is_configured": true, 00:07:37.162 "data_offset": 2048, 00:07:37.162 "data_size": 63488 00:07:37.162 }, 00:07:37.162 { 00:07:37.162 "name": "BaseBdev2", 00:07:37.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.162 "is_configured": false, 00:07:37.162 "data_offset": 0, 00:07:37.162 "data_size": 0 00:07:37.162 }, 00:07:37.162 { 00:07:37.162 "name": "BaseBdev3", 00:07:37.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.162 "is_configured": false, 00:07:37.162 "data_offset": 0, 00:07:37.162 "data_size": 0 00:07:37.162 } 00:07:37.162 ] 00:07:37.162 }' 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.162 18:38:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.732 [2024-12-15 18:38:38.020847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.732 BaseBdev2 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.732 [ 00:07:37.732 { 00:07:37.732 "name": "BaseBdev2", 00:07:37.732 "aliases": [ 00:07:37.732 "bbd61a25-1093-454b-8c2d-c8c5ff778c35" 00:07:37.732 ], 00:07:37.732 "product_name": "Malloc disk", 00:07:37.732 "block_size": 512, 00:07:37.732 "num_blocks": 65536, 00:07:37.732 "uuid": "bbd61a25-1093-454b-8c2d-c8c5ff778c35", 00:07:37.732 "assigned_rate_limits": { 00:07:37.732 "rw_ios_per_sec": 0, 00:07:37.732 "rw_mbytes_per_sec": 0, 00:07:37.732 "r_mbytes_per_sec": 0, 00:07:37.732 "w_mbytes_per_sec": 0 00:07:37.732 }, 00:07:37.732 "claimed": true, 00:07:37.732 "claim_type": "exclusive_write", 00:07:37.732 "zoned": false, 00:07:37.732 "supported_io_types": { 00:07:37.732 "read": true, 00:07:37.732 "write": true, 00:07:37.732 "unmap": true, 00:07:37.732 "flush": true, 00:07:37.732 "reset": true, 00:07:37.732 "nvme_admin": false, 00:07:37.732 "nvme_io": false, 00:07:37.732 "nvme_io_md": false, 00:07:37.732 "write_zeroes": true, 00:07:37.732 "zcopy": true, 00:07:37.732 "get_zone_info": false, 00:07:37.732 "zone_management": false, 00:07:37.732 "zone_append": false, 00:07:37.732 "compare": false, 00:07:37.732 "compare_and_write": false, 00:07:37.732 "abort": true, 00:07:37.732 "seek_hole": false, 00:07:37.732 "seek_data": false, 00:07:37.732 "copy": true, 00:07:37.732 "nvme_iov_md": false 00:07:37.732 }, 00:07:37.732 "memory_domains": [ 00:07:37.732 { 00:07:37.732 "dma_device_id": "system", 00:07:37.732 "dma_device_type": 1 00:07:37.732 }, 00:07:37.732 { 00:07:37.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.732 "dma_device_type": 2 00:07:37.732 } 00:07:37.732 ], 00:07:37.732 "driver_specific": {} 00:07:37.732 } 00:07:37.732 ] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.732 "name": "Existed_Raid", 00:07:37.732 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:37.732 "strip_size_kb": 64, 00:07:37.732 "state": "configuring", 00:07:37.732 "raid_level": "raid0", 00:07:37.732 "superblock": true, 00:07:37.732 "num_base_bdevs": 3, 00:07:37.732 "num_base_bdevs_discovered": 2, 00:07:37.732 "num_base_bdevs_operational": 3, 00:07:37.732 "base_bdevs_list": [ 00:07:37.732 { 00:07:37.732 "name": "BaseBdev1", 00:07:37.732 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:37.732 "is_configured": true, 00:07:37.732 "data_offset": 2048, 00:07:37.732 "data_size": 63488 00:07:37.732 }, 00:07:37.732 { 00:07:37.732 "name": "BaseBdev2", 00:07:37.732 "uuid": "bbd61a25-1093-454b-8c2d-c8c5ff778c35", 00:07:37.732 "is_configured": true, 00:07:37.732 "data_offset": 2048, 00:07:37.732 "data_size": 63488 00:07:37.732 }, 00:07:37.732 { 00:07:37.732 "name": "BaseBdev3", 00:07:37.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.732 "is_configured": false, 00:07:37.732 "data_offset": 0, 00:07:37.732 "data_size": 0 00:07:37.732 } 00:07:37.732 ] 00:07:37.732 }' 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.732 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 [2024-12-15 18:38:38.493227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:38.303 [2024-12-15 18:38:38.493551] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:38.303 [2024-12-15 18:38:38.493609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:38.303 [2024-12-15 18:38:38.493992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:38.303 BaseBdev3 00:07:38.303 [2024-12-15 18:38:38.494231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:38.303 [2024-12-15 18:38:38.494253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:38.303 [2024-12-15 18:38:38.494427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 [ 00:07:38.303 { 00:07:38.303 "name": "BaseBdev3", 00:07:38.303 "aliases": [ 00:07:38.303 "c0dc71fa-4f8b-4532-a963-3ec29e4964b7" 00:07:38.303 ], 00:07:38.303 "product_name": "Malloc disk", 00:07:38.303 "block_size": 512, 00:07:38.303 "num_blocks": 65536, 00:07:38.303 "uuid": "c0dc71fa-4f8b-4532-a963-3ec29e4964b7", 00:07:38.303 "assigned_rate_limits": { 00:07:38.303 "rw_ios_per_sec": 0, 00:07:38.303 "rw_mbytes_per_sec": 0, 00:07:38.303 "r_mbytes_per_sec": 0, 00:07:38.303 "w_mbytes_per_sec": 0 00:07:38.303 }, 00:07:38.303 "claimed": true, 00:07:38.303 "claim_type": "exclusive_write", 00:07:38.303 "zoned": false, 00:07:38.303 "supported_io_types": { 00:07:38.303 "read": true, 00:07:38.303 "write": true, 00:07:38.303 "unmap": true, 00:07:38.303 "flush": true, 00:07:38.303 "reset": true, 00:07:38.303 "nvme_admin": false, 00:07:38.303 "nvme_io": false, 00:07:38.303 "nvme_io_md": false, 00:07:38.303 "write_zeroes": true, 00:07:38.303 "zcopy": true, 00:07:38.303 "get_zone_info": false, 00:07:38.303 "zone_management": false, 00:07:38.303 "zone_append": false, 00:07:38.303 "compare": false, 00:07:38.303 "compare_and_write": false, 00:07:38.303 "abort": true, 00:07:38.303 "seek_hole": false, 00:07:38.303 "seek_data": false, 00:07:38.303 "copy": true, 00:07:38.303 "nvme_iov_md": false 00:07:38.303 }, 00:07:38.303 "memory_domains": [ 00:07:38.303 { 00:07:38.303 "dma_device_id": "system", 00:07:38.303 "dma_device_type": 1 00:07:38.303 }, 00:07:38.303 { 00:07:38.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.303 "dma_device_type": 2 00:07:38.303 } 00:07:38.303 ], 00:07:38.303 "driver_specific": 
{} 00:07:38.303 } 00:07:38.303 ] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.303 "name": "Existed_Raid", 00:07:38.303 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:38.303 "strip_size_kb": 64, 00:07:38.303 "state": "online", 00:07:38.303 "raid_level": "raid0", 00:07:38.303 "superblock": true, 00:07:38.303 "num_base_bdevs": 3, 00:07:38.303 "num_base_bdevs_discovered": 3, 00:07:38.303 "num_base_bdevs_operational": 3, 00:07:38.303 "base_bdevs_list": [ 00:07:38.303 { 00:07:38.303 "name": "BaseBdev1", 00:07:38.303 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:38.303 "is_configured": true, 00:07:38.303 "data_offset": 2048, 00:07:38.303 "data_size": 63488 00:07:38.303 }, 00:07:38.303 { 00:07:38.303 "name": "BaseBdev2", 00:07:38.303 "uuid": "bbd61a25-1093-454b-8c2d-c8c5ff778c35", 00:07:38.303 "is_configured": true, 00:07:38.303 "data_offset": 2048, 00:07:38.303 "data_size": 63488 00:07:38.303 }, 00:07:38.303 { 00:07:38.303 "name": "BaseBdev3", 00:07:38.303 "uuid": "c0dc71fa-4f8b-4532-a963-3ec29e4964b7", 00:07:38.303 "is_configured": true, 00:07:38.303 "data_offset": 2048, 00:07:38.303 "data_size": 63488 00:07:38.303 } 00:07:38.303 ] 00:07:38.303 }' 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.303 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.563 18:38:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 [2024-12-15 18:38:38.996757] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.823 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.823 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.823 "name": "Existed_Raid", 00:07:38.823 "aliases": [ 00:07:38.823 "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a" 00:07:38.823 ], 00:07:38.823 "product_name": "Raid Volume", 00:07:38.823 "block_size": 512, 00:07:38.823 "num_blocks": 190464, 00:07:38.823 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:38.823 "assigned_rate_limits": { 00:07:38.823 "rw_ios_per_sec": 0, 00:07:38.823 "rw_mbytes_per_sec": 0, 00:07:38.823 "r_mbytes_per_sec": 0, 00:07:38.823 "w_mbytes_per_sec": 0 00:07:38.823 }, 00:07:38.823 "claimed": false, 00:07:38.823 "zoned": false, 00:07:38.823 "supported_io_types": { 00:07:38.823 "read": true, 00:07:38.823 "write": true, 00:07:38.823 "unmap": true, 00:07:38.823 "flush": true, 00:07:38.823 "reset": true, 00:07:38.823 "nvme_admin": false, 00:07:38.823 "nvme_io": false, 00:07:38.823 "nvme_io_md": false, 00:07:38.823 
"write_zeroes": true, 00:07:38.823 "zcopy": false, 00:07:38.823 "get_zone_info": false, 00:07:38.823 "zone_management": false, 00:07:38.823 "zone_append": false, 00:07:38.823 "compare": false, 00:07:38.823 "compare_and_write": false, 00:07:38.823 "abort": false, 00:07:38.823 "seek_hole": false, 00:07:38.823 "seek_data": false, 00:07:38.823 "copy": false, 00:07:38.823 "nvme_iov_md": false 00:07:38.823 }, 00:07:38.823 "memory_domains": [ 00:07:38.823 { 00:07:38.823 "dma_device_id": "system", 00:07:38.823 "dma_device_type": 1 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.823 "dma_device_type": 2 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "dma_device_id": "system", 00:07:38.823 "dma_device_type": 1 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.823 "dma_device_type": 2 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "dma_device_id": "system", 00:07:38.823 "dma_device_type": 1 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.823 "dma_device_type": 2 00:07:38.823 } 00:07:38.823 ], 00:07:38.823 "driver_specific": { 00:07:38.823 "raid": { 00:07:38.823 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:38.823 "strip_size_kb": 64, 00:07:38.823 "state": "online", 00:07:38.823 "raid_level": "raid0", 00:07:38.823 "superblock": true, 00:07:38.823 "num_base_bdevs": 3, 00:07:38.823 "num_base_bdevs_discovered": 3, 00:07:38.823 "num_base_bdevs_operational": 3, 00:07:38.823 "base_bdevs_list": [ 00:07:38.823 { 00:07:38.823 "name": "BaseBdev1", 00:07:38.823 "uuid": "c567b8be-edfd-46f9-a7ee-3099c61032e5", 00:07:38.823 "is_configured": true, 00:07:38.823 "data_offset": 2048, 00:07:38.823 "data_size": 63488 00:07:38.823 }, 00:07:38.823 { 00:07:38.823 "name": "BaseBdev2", 00:07:38.823 "uuid": "bbd61a25-1093-454b-8c2d-c8c5ff778c35", 00:07:38.823 "is_configured": true, 00:07:38.824 "data_offset": 2048, 00:07:38.824 "data_size": 63488 00:07:38.824 }, 
00:07:38.824 { 00:07:38.824 "name": "BaseBdev3", 00:07:38.824 "uuid": "c0dc71fa-4f8b-4532-a963-3ec29e4964b7", 00:07:38.824 "is_configured": true, 00:07:38.824 "data_offset": 2048, 00:07:38.824 "data_size": 63488 00:07:38.824 } 00:07:38.824 ] 00:07:38.824 } 00:07:38.824 } 00:07:38.824 }' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.824 BaseBdev2 00:07:38.824 BaseBdev3' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.824 
18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.824 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.084 [2024-12-15 18:38:39.280186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.084 [2024-12-15 18:38:39.280231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.084 [2024-12-15 18:38:39.280308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.084 "name": "Existed_Raid", 00:07:39.084 "uuid": "0ea9bed4-87f8-4cba-b075-ca9e17f7c09a", 00:07:39.084 "strip_size_kb": 64, 00:07:39.084 "state": "offline", 00:07:39.084 "raid_level": "raid0", 00:07:39.084 "superblock": true, 00:07:39.084 "num_base_bdevs": 3, 00:07:39.084 "num_base_bdevs_discovered": 2, 00:07:39.084 "num_base_bdevs_operational": 2, 00:07:39.084 "base_bdevs_list": [ 00:07:39.084 { 00:07:39.084 "name": null, 00:07:39.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.084 "is_configured": false, 00:07:39.084 "data_offset": 0, 00:07:39.084 "data_size": 63488 00:07:39.084 }, 00:07:39.084 { 00:07:39.084 "name": "BaseBdev2", 00:07:39.084 "uuid": "bbd61a25-1093-454b-8c2d-c8c5ff778c35", 00:07:39.084 "is_configured": true, 00:07:39.084 "data_offset": 2048, 00:07:39.084 "data_size": 63488 00:07:39.084 }, 00:07:39.084 { 00:07:39.084 "name": "BaseBdev3", 00:07:39.084 "uuid": "c0dc71fa-4f8b-4532-a963-3ec29e4964b7", 
00:07:39.084 "is_configured": true, 00:07:39.084 "data_offset": 2048, 00:07:39.084 "data_size": 63488 00:07:39.084 } 00:07:39.084 ] 00:07:39.084 }' 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.084 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.344 [2024-12-15 18:38:39.756479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.344 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 [2024-12-15 18:38:39.853152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:39.605 [2024-12-15 18:38:39.853314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 BaseBdev2 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:39.605 18:38:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 [ 00:07:39.605 { 00:07:39.605 "name": "BaseBdev2", 00:07:39.605 "aliases": [ 00:07:39.605 "c9ccf61b-73cb-4730-9550-cb101822f5e2" 00:07:39.605 ], 00:07:39.605 "product_name": "Malloc disk", 00:07:39.605 "block_size": 512, 00:07:39.605 "num_blocks": 65536, 00:07:39.605 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:39.605 "assigned_rate_limits": { 00:07:39.605 "rw_ios_per_sec": 0, 00:07:39.605 "rw_mbytes_per_sec": 0, 00:07:39.605 "r_mbytes_per_sec": 0, 00:07:39.605 "w_mbytes_per_sec": 0 00:07:39.605 }, 00:07:39.605 "claimed": false, 00:07:39.605 "zoned": false, 00:07:39.605 "supported_io_types": { 00:07:39.605 "read": true, 00:07:39.605 "write": true, 00:07:39.605 "unmap": true, 00:07:39.605 "flush": true, 00:07:39.605 "reset": true, 00:07:39.605 "nvme_admin": false, 00:07:39.605 "nvme_io": false, 00:07:39.605 "nvme_io_md": false, 00:07:39.605 "write_zeroes": true, 00:07:39.605 "zcopy": true, 00:07:39.605 "get_zone_info": false, 00:07:39.605 
"zone_management": false, 00:07:39.605 "zone_append": false, 00:07:39.605 "compare": false, 00:07:39.605 "compare_and_write": false, 00:07:39.605 "abort": true, 00:07:39.605 "seek_hole": false, 00:07:39.605 "seek_data": false, 00:07:39.605 "copy": true, 00:07:39.605 "nvme_iov_md": false 00:07:39.605 }, 00:07:39.605 "memory_domains": [ 00:07:39.605 { 00:07:39.605 "dma_device_id": "system", 00:07:39.605 "dma_device_type": 1 00:07:39.605 }, 00:07:39.605 { 00:07:39.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.605 "dma_device_type": 2 00:07:39.605 } 00:07:39.605 ], 00:07:39.605 "driver_specific": {} 00:07:39.605 } 00:07:39.605 ] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.605 18:38:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.605 BaseBdev3 00:07:39.605 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.605 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:39.605 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.606 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.606 [ 00:07:39.606 { 00:07:39.606 "name": "BaseBdev3", 00:07:39.606 "aliases": [ 00:07:39.606 "67c6cbf5-688d-449a-ba86-1dfea37e676f" 00:07:39.606 ], 00:07:39.606 "product_name": "Malloc disk", 00:07:39.606 "block_size": 512, 00:07:39.606 "num_blocks": 65536, 00:07:39.869 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:39.869 "assigned_rate_limits": { 00:07:39.869 "rw_ios_per_sec": 0, 00:07:39.869 "rw_mbytes_per_sec": 0, 00:07:39.869 "r_mbytes_per_sec": 0, 00:07:39.869 "w_mbytes_per_sec": 0 00:07:39.869 }, 00:07:39.869 "claimed": false, 00:07:39.869 "zoned": false, 00:07:39.869 "supported_io_types": { 00:07:39.869 "read": true, 00:07:39.869 "write": true, 00:07:39.869 "unmap": true, 00:07:39.869 "flush": true, 00:07:39.869 "reset": true, 00:07:39.869 "nvme_admin": false, 00:07:39.869 "nvme_io": false, 00:07:39.869 "nvme_io_md": false, 00:07:39.869 "write_zeroes": true, 00:07:39.869 
"zcopy": true, 00:07:39.869 "get_zone_info": false, 00:07:39.869 "zone_management": false, 00:07:39.869 "zone_append": false, 00:07:39.869 "compare": false, 00:07:39.869 "compare_and_write": false, 00:07:39.869 "abort": true, 00:07:39.869 "seek_hole": false, 00:07:39.869 "seek_data": false, 00:07:39.869 "copy": true, 00:07:39.869 "nvme_iov_md": false 00:07:39.869 }, 00:07:39.869 "memory_domains": [ 00:07:39.869 { 00:07:39.869 "dma_device_id": "system", 00:07:39.869 "dma_device_type": 1 00:07:39.869 }, 00:07:39.869 { 00:07:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.869 "dma_device_type": 2 00:07:39.869 } 00:07:39.869 ], 00:07:39.869 "driver_specific": {} 00:07:39.869 } 00:07:39.869 ] 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.869 [2024-12-15 18:38:40.061097] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.869 [2024-12-15 18:38:40.061307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.869 [2024-12-15 18:38:40.061367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.869 [2024-12-15 18:38:40.063569] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.869 18:38:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.869 "name": "Existed_Raid", 00:07:39.869 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:39.869 "strip_size_kb": 64, 00:07:39.869 "state": "configuring", 00:07:39.869 "raid_level": "raid0", 00:07:39.869 "superblock": true, 00:07:39.869 "num_base_bdevs": 3, 00:07:39.869 "num_base_bdevs_discovered": 2, 00:07:39.869 "num_base_bdevs_operational": 3, 00:07:39.869 "base_bdevs_list": [ 00:07:39.869 { 00:07:39.869 "name": "BaseBdev1", 00:07:39.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.869 "is_configured": false, 00:07:39.869 "data_offset": 0, 00:07:39.869 "data_size": 0 00:07:39.869 }, 00:07:39.869 { 00:07:39.869 "name": "BaseBdev2", 00:07:39.869 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:39.869 "is_configured": true, 00:07:39.869 "data_offset": 2048, 00:07:39.869 "data_size": 63488 00:07:39.869 }, 00:07:39.869 { 00:07:39.869 "name": "BaseBdev3", 00:07:39.869 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:39.869 "is_configured": true, 00:07:39.869 "data_offset": 2048, 00:07:39.869 "data_size": 63488 00:07:39.869 } 00:07:39.869 ] 00:07:39.869 }' 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.869 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.128 [2024-12-15 18:38:40.528463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.128 18:38:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.128 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.388 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.388 "name": "Existed_Raid", 00:07:40.388 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:40.388 "strip_size_kb": 64, 
00:07:40.388 "state": "configuring", 00:07:40.388 "raid_level": "raid0", 00:07:40.388 "superblock": true, 00:07:40.388 "num_base_bdevs": 3, 00:07:40.388 "num_base_bdevs_discovered": 1, 00:07:40.388 "num_base_bdevs_operational": 3, 00:07:40.388 "base_bdevs_list": [ 00:07:40.388 { 00:07:40.388 "name": "BaseBdev1", 00:07:40.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.388 "is_configured": false, 00:07:40.388 "data_offset": 0, 00:07:40.388 "data_size": 0 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": null, 00:07:40.388 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:40.388 "is_configured": false, 00:07:40.388 "data_offset": 0, 00:07:40.388 "data_size": 63488 00:07:40.388 }, 00:07:40.388 { 00:07:40.388 "name": "BaseBdev3", 00:07:40.388 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:40.388 "is_configured": true, 00:07:40.388 "data_offset": 2048, 00:07:40.388 "data_size": 63488 00:07:40.388 } 00:07:40.388 ] 00:07:40.388 }' 00:07:40.388 18:38:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.388 18:38:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.648 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.648 [2024-12-15 18:38:41.084632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.908 BaseBdev1 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.908 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.908 
[ 00:07:40.908 { 00:07:40.908 "name": "BaseBdev1", 00:07:40.908 "aliases": [ 00:07:40.908 "4594d445-0bb6-4154-85bc-64b69969d3a9" 00:07:40.908 ], 00:07:40.908 "product_name": "Malloc disk", 00:07:40.908 "block_size": 512, 00:07:40.908 "num_blocks": 65536, 00:07:40.908 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:40.908 "assigned_rate_limits": { 00:07:40.908 "rw_ios_per_sec": 0, 00:07:40.908 "rw_mbytes_per_sec": 0, 00:07:40.908 "r_mbytes_per_sec": 0, 00:07:40.908 "w_mbytes_per_sec": 0 00:07:40.909 }, 00:07:40.909 "claimed": true, 00:07:40.909 "claim_type": "exclusive_write", 00:07:40.909 "zoned": false, 00:07:40.909 "supported_io_types": { 00:07:40.909 "read": true, 00:07:40.909 "write": true, 00:07:40.909 "unmap": true, 00:07:40.909 "flush": true, 00:07:40.909 "reset": true, 00:07:40.909 "nvme_admin": false, 00:07:40.909 "nvme_io": false, 00:07:40.909 "nvme_io_md": false, 00:07:40.909 "write_zeroes": true, 00:07:40.909 "zcopy": true, 00:07:40.909 "get_zone_info": false, 00:07:40.909 "zone_management": false, 00:07:40.909 "zone_append": false, 00:07:40.909 "compare": false, 00:07:40.909 "compare_and_write": false, 00:07:40.909 "abort": true, 00:07:40.909 "seek_hole": false, 00:07:40.909 "seek_data": false, 00:07:40.909 "copy": true, 00:07:40.909 "nvme_iov_md": false 00:07:40.909 }, 00:07:40.909 "memory_domains": [ 00:07:40.909 { 00:07:40.909 "dma_device_id": "system", 00:07:40.909 "dma_device_type": 1 00:07:40.909 }, 00:07:40.909 { 00:07:40.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.909 "dma_device_type": 2 00:07:40.909 } 00:07:40.909 ], 00:07:40.909 "driver_specific": {} 00:07:40.909 } 00:07:40.909 ] 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.909 "name": "Existed_Raid", 00:07:40.909 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:40.909 "strip_size_kb": 64, 00:07:40.909 "state": "configuring", 00:07:40.909 "raid_level": "raid0", 00:07:40.909 "superblock": true, 
00:07:40.909 "num_base_bdevs": 3, 00:07:40.909 "num_base_bdevs_discovered": 2, 00:07:40.909 "num_base_bdevs_operational": 3, 00:07:40.909 "base_bdevs_list": [ 00:07:40.909 { 00:07:40.909 "name": "BaseBdev1", 00:07:40.909 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:40.909 "is_configured": true, 00:07:40.909 "data_offset": 2048, 00:07:40.909 "data_size": 63488 00:07:40.909 }, 00:07:40.909 { 00:07:40.909 "name": null, 00:07:40.909 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:40.909 "is_configured": false, 00:07:40.909 "data_offset": 0, 00:07:40.909 "data_size": 63488 00:07:40.909 }, 00:07:40.909 { 00:07:40.909 "name": "BaseBdev3", 00:07:40.909 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:40.909 "is_configured": true, 00:07:40.909 "data_offset": 2048, 00:07:40.909 "data_size": 63488 00:07:40.909 } 00:07:40.909 ] 00:07:40.909 }' 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.909 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:07:41.169 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.169 [2024-12-15 18:38:41.608140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.429 "name": "Existed_Raid", 00:07:41.429 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:41.429 "strip_size_kb": 64, 00:07:41.429 "state": "configuring", 00:07:41.429 "raid_level": "raid0", 00:07:41.429 "superblock": true, 00:07:41.429 "num_base_bdevs": 3, 00:07:41.429 "num_base_bdevs_discovered": 1, 00:07:41.429 "num_base_bdevs_operational": 3, 00:07:41.429 "base_bdevs_list": [ 00:07:41.429 { 00:07:41.429 "name": "BaseBdev1", 00:07:41.429 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:41.429 "is_configured": true, 00:07:41.429 "data_offset": 2048, 00:07:41.429 "data_size": 63488 00:07:41.429 }, 00:07:41.429 { 00:07:41.429 "name": null, 00:07:41.429 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:41.429 "is_configured": false, 00:07:41.429 "data_offset": 0, 00:07:41.429 "data_size": 63488 00:07:41.429 }, 00:07:41.429 { 00:07:41.429 "name": null, 00:07:41.429 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:41.429 "is_configured": false, 00:07:41.429 "data_offset": 0, 00:07:41.429 "data_size": 63488 00:07:41.429 } 00:07:41.429 ] 00:07:41.429 }' 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.429 18:38:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.689 [2024-12-15 18:38:42.115286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.689 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.949 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.949 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.949 "name": "Existed_Raid", 00:07:41.949 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:41.949 "strip_size_kb": 64, 00:07:41.949 "state": "configuring", 00:07:41.949 "raid_level": "raid0", 00:07:41.949 "superblock": true, 00:07:41.949 "num_base_bdevs": 3, 00:07:41.949 "num_base_bdevs_discovered": 2, 00:07:41.949 "num_base_bdevs_operational": 3, 00:07:41.949 "base_bdevs_list": [ 00:07:41.949 { 00:07:41.949 "name": "BaseBdev1", 00:07:41.949 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:41.949 "is_configured": true, 00:07:41.949 "data_offset": 2048, 00:07:41.949 "data_size": 63488 00:07:41.949 }, 00:07:41.949 { 00:07:41.949 "name": null, 00:07:41.949 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:41.949 "is_configured": false, 00:07:41.949 "data_offset": 0, 00:07:41.949 "data_size": 63488 00:07:41.949 }, 00:07:41.949 { 00:07:41.949 "name": "BaseBdev3", 00:07:41.949 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:41.949 "is_configured": true, 00:07:41.949 "data_offset": 2048, 00:07:41.949 "data_size": 63488 00:07:41.949 } 00:07:41.949 ] 00:07:41.949 }' 00:07:41.949 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.949 18:38:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.209 [2024-12-15 18:38:42.578575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.209 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.210 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.470 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.470 "name": "Existed_Raid", 00:07:42.470 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:42.470 "strip_size_kb": 64, 00:07:42.470 "state": "configuring", 00:07:42.470 "raid_level": "raid0", 00:07:42.470 "superblock": true, 00:07:42.470 "num_base_bdevs": 3, 00:07:42.470 "num_base_bdevs_discovered": 1, 00:07:42.470 "num_base_bdevs_operational": 3, 00:07:42.470 "base_bdevs_list": [ 00:07:42.470 { 00:07:42.470 "name": null, 00:07:42.470 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:42.470 "is_configured": false, 00:07:42.470 "data_offset": 0, 00:07:42.470 "data_size": 63488 00:07:42.470 }, 00:07:42.470 { 00:07:42.470 "name": null, 00:07:42.470 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:42.470 "is_configured": false, 00:07:42.470 "data_offset": 0, 00:07:42.470 
"data_size": 63488 00:07:42.470 }, 00:07:42.470 { 00:07:42.470 "name": "BaseBdev3", 00:07:42.470 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:42.470 "is_configured": true, 00:07:42.470 "data_offset": 2048, 00:07:42.470 "data_size": 63488 00:07:42.470 } 00:07:42.470 ] 00:07:42.470 }' 00:07:42.470 18:38:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.470 18:38:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.730 [2024-12-15 18:38:43.125857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.730 18:38:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.730 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.990 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.990 "name": "Existed_Raid", 00:07:42.990 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:42.990 "strip_size_kb": 64, 00:07:42.990 "state": "configuring", 00:07:42.990 "raid_level": "raid0", 00:07:42.990 "superblock": true, 00:07:42.990 "num_base_bdevs": 3, 00:07:42.990 
"num_base_bdevs_discovered": 2, 00:07:42.990 "num_base_bdevs_operational": 3, 00:07:42.990 "base_bdevs_list": [ 00:07:42.990 { 00:07:42.990 "name": null, 00:07:42.990 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:42.990 "is_configured": false, 00:07:42.990 "data_offset": 0, 00:07:42.990 "data_size": 63488 00:07:42.990 }, 00:07:42.990 { 00:07:42.990 "name": "BaseBdev2", 00:07:42.990 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:42.990 "is_configured": true, 00:07:42.990 "data_offset": 2048, 00:07:42.990 "data_size": 63488 00:07:42.990 }, 00:07:42.990 { 00:07:42.990 "name": "BaseBdev3", 00:07:42.990 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:42.990 "is_configured": true, 00:07:42.990 "data_offset": 2048, 00:07:42.990 "data_size": 63488 00:07:42.990 } 00:07:42.990 ] 00:07:42.990 }' 00:07:42.990 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.990 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.253 18:38:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4594d445-0bb6-4154-85bc-64b69969d3a9 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.253 [2024-12-15 18:38:43.649856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:43.253 [2024-12-15 18:38:43.650067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:43.253 [2024-12-15 18:38:43.650085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:43.253 [2024-12-15 18:38:43.650367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:43.253 NewBaseBdev 00:07:43.253 [2024-12-15 18:38:43.650503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:43.253 [2024-12-15 18:38:43.650512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:07:43.253 [2024-12-15 18:38:43.650630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.253 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:43.254 
18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 [ 00:07:43.254 { 00:07:43.254 "name": "NewBaseBdev", 00:07:43.254 "aliases": [ 00:07:43.254 "4594d445-0bb6-4154-85bc-64b69969d3a9" 00:07:43.254 ], 00:07:43.254 "product_name": "Malloc disk", 00:07:43.254 "block_size": 512, 00:07:43.254 "num_blocks": 65536, 00:07:43.254 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:43.254 "assigned_rate_limits": { 00:07:43.254 "rw_ios_per_sec": 0, 00:07:43.254 "rw_mbytes_per_sec": 0, 00:07:43.254 "r_mbytes_per_sec": 0, 00:07:43.254 "w_mbytes_per_sec": 0 00:07:43.254 }, 00:07:43.254 "claimed": true, 00:07:43.254 "claim_type": "exclusive_write", 00:07:43.254 "zoned": false, 00:07:43.254 "supported_io_types": { 00:07:43.254 "read": true, 00:07:43.254 "write": true, 00:07:43.254 
"unmap": true, 00:07:43.254 "flush": true, 00:07:43.254 "reset": true, 00:07:43.254 "nvme_admin": false, 00:07:43.254 "nvme_io": false, 00:07:43.254 "nvme_io_md": false, 00:07:43.254 "write_zeroes": true, 00:07:43.254 "zcopy": true, 00:07:43.254 "get_zone_info": false, 00:07:43.254 "zone_management": false, 00:07:43.254 "zone_append": false, 00:07:43.254 "compare": false, 00:07:43.254 "compare_and_write": false, 00:07:43.254 "abort": true, 00:07:43.254 "seek_hole": false, 00:07:43.254 "seek_data": false, 00:07:43.254 "copy": true, 00:07:43.254 "nvme_iov_md": false 00:07:43.254 }, 00:07:43.254 "memory_domains": [ 00:07:43.254 { 00:07:43.254 "dma_device_id": "system", 00:07:43.254 "dma_device_type": 1 00:07:43.254 }, 00:07:43.254 { 00:07:43.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.254 "dma_device_type": 2 00:07:43.254 } 00:07:43.254 ], 00:07:43.254 "driver_specific": {} 00:07:43.254 } 00:07:43.254 ] 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:43.254 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.519 "name": "Existed_Raid", 00:07:43.519 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:43.519 "strip_size_kb": 64, 00:07:43.519 "state": "online", 00:07:43.519 "raid_level": "raid0", 00:07:43.519 "superblock": true, 00:07:43.519 "num_base_bdevs": 3, 00:07:43.519 "num_base_bdevs_discovered": 3, 00:07:43.519 "num_base_bdevs_operational": 3, 00:07:43.519 "base_bdevs_list": [ 00:07:43.519 { 00:07:43.519 "name": "NewBaseBdev", 00:07:43.519 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:43.519 "is_configured": true, 00:07:43.519 "data_offset": 2048, 00:07:43.519 "data_size": 63488 00:07:43.519 }, 00:07:43.519 { 00:07:43.519 "name": "BaseBdev2", 00:07:43.519 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:43.519 "is_configured": true, 00:07:43.519 "data_offset": 2048, 00:07:43.519 "data_size": 63488 00:07:43.519 }, 00:07:43.519 { 00:07:43.519 "name": "BaseBdev3", 00:07:43.519 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:43.519 
"is_configured": true, 00:07:43.519 "data_offset": 2048, 00:07:43.519 "data_size": 63488 00:07:43.519 } 00:07:43.519 ] 00:07:43.519 }' 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.519 18:38:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.779 [2024-12-15 18:38:44.181331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.779 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.039 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.039 "name": "Existed_Raid", 00:07:44.039 "aliases": [ 00:07:44.039 "c4839004-7a6d-4441-82ba-d9dc56d3f29f" 00:07:44.039 ], 00:07:44.039 "product_name": "Raid 
Volume", 00:07:44.039 "block_size": 512, 00:07:44.039 "num_blocks": 190464, 00:07:44.040 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:44.040 "assigned_rate_limits": { 00:07:44.040 "rw_ios_per_sec": 0, 00:07:44.040 "rw_mbytes_per_sec": 0, 00:07:44.040 "r_mbytes_per_sec": 0, 00:07:44.040 "w_mbytes_per_sec": 0 00:07:44.040 }, 00:07:44.040 "claimed": false, 00:07:44.040 "zoned": false, 00:07:44.040 "supported_io_types": { 00:07:44.040 "read": true, 00:07:44.040 "write": true, 00:07:44.040 "unmap": true, 00:07:44.040 "flush": true, 00:07:44.040 "reset": true, 00:07:44.040 "nvme_admin": false, 00:07:44.040 "nvme_io": false, 00:07:44.040 "nvme_io_md": false, 00:07:44.040 "write_zeroes": true, 00:07:44.040 "zcopy": false, 00:07:44.040 "get_zone_info": false, 00:07:44.040 "zone_management": false, 00:07:44.040 "zone_append": false, 00:07:44.040 "compare": false, 00:07:44.040 "compare_and_write": false, 00:07:44.040 "abort": false, 00:07:44.040 "seek_hole": false, 00:07:44.040 "seek_data": false, 00:07:44.040 "copy": false, 00:07:44.040 "nvme_iov_md": false 00:07:44.040 }, 00:07:44.040 "memory_domains": [ 00:07:44.040 { 00:07:44.040 "dma_device_id": "system", 00:07:44.040 "dma_device_type": 1 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.040 "dma_device_type": 2 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "system", 00:07:44.040 "dma_device_type": 1 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.040 "dma_device_type": 2 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "system", 00:07:44.040 "dma_device_type": 1 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.040 "dma_device_type": 2 00:07:44.040 } 00:07:44.040 ], 00:07:44.040 "driver_specific": { 00:07:44.040 "raid": { 00:07:44.040 "uuid": "c4839004-7a6d-4441-82ba-d9dc56d3f29f", 00:07:44.040 "strip_size_kb": 64, 00:07:44.040 "state": "online", 
00:07:44.040 "raid_level": "raid0", 00:07:44.040 "superblock": true, 00:07:44.040 "num_base_bdevs": 3, 00:07:44.040 "num_base_bdevs_discovered": 3, 00:07:44.040 "num_base_bdevs_operational": 3, 00:07:44.040 "base_bdevs_list": [ 00:07:44.040 { 00:07:44.040 "name": "NewBaseBdev", 00:07:44.040 "uuid": "4594d445-0bb6-4154-85bc-64b69969d3a9", 00:07:44.040 "is_configured": true, 00:07:44.040 "data_offset": 2048, 00:07:44.040 "data_size": 63488 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "name": "BaseBdev2", 00:07:44.040 "uuid": "c9ccf61b-73cb-4730-9550-cb101822f5e2", 00:07:44.040 "is_configured": true, 00:07:44.040 "data_offset": 2048, 00:07:44.040 "data_size": 63488 00:07:44.040 }, 00:07:44.040 { 00:07:44.040 "name": "BaseBdev3", 00:07:44.040 "uuid": "67c6cbf5-688d-449a-ba86-1dfea37e676f", 00:07:44.040 "is_configured": true, 00:07:44.040 "data_offset": 2048, 00:07:44.040 "data_size": 63488 00:07:44.040 } 00:07:44.040 ] 00:07:44.040 } 00:07:44.040 } 00:07:44.040 }' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:44.040 BaseBdev2 00:07:44.040 BaseBdev3' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.040 18:38:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.040 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.300 [2024-12-15 18:38:44.480469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.300 [2024-12-15 18:38:44.480597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.300 [2024-12-15 18:38:44.480713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.300 [2024-12-15 18:38:44.480781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.300 [2024-12-15 18:38:44.480826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77561 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77561 ']' 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
77561 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77561 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.300 killing process with pid 77561 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77561' 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77561 00:07:44.300 [2024-12-15 18:38:44.524733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.300 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77561 00:07:44.300 [2024-12-15 18:38:44.586196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.560 18:38:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.560 00:07:44.560 real 0m9.297s 00:07:44.560 user 0m15.628s 00:07:44.560 sys 0m1.936s 00:07:44.560 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.560 18:38:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.560 ************************************ 00:07:44.560 END TEST raid_state_function_test_sb 00:07:44.560 ************************************ 00:07:44.560 18:38:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:44.560 18:38:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:44.560 
18:38:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.560 18:38:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.560 ************************************ 00:07:44.560 START TEST raid_superblock_test 00:07:44.560 ************************************ 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78170 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78170 00:07:44.560 18:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 78170 ']' 00:07:44.819 18:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.819 18:38:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.819 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.819 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.819 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.819 [2024-12-15 18:38:45.090927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:44.819 [2024-12-15 18:38:45.091198] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78170 ] 00:07:45.078 [2024-12-15 18:38:45.264786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.078 [2024-12-15 18:38:45.308578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.078 [2024-12-15 18:38:45.385217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.078 [2024-12-15 18:38:45.385265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:45.648 
18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 malloc1 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 [2024-12-15 18:38:45.955697] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.648 [2024-12-15 18:38:45.955880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.648 [2024-12-15 18:38:45.955925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:45.648 [2024-12-15 18:38:45.955968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.648 [2024-12-15 18:38:45.958591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.648 [2024-12-15 18:38:45.958676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.648 pt1 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 malloc2 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.648 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.648 [2024-12-15 18:38:45.995280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.649 [2024-12-15 18:38:45.995453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.649 [2024-12-15 18:38:45.995478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:45.649 [2024-12-15 18:38:45.995490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.649 [2024-12-15 18:38:45.998177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.649 [2024-12-15 18:38:45.998222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.649 
pt2 00:07:45.649 18:38:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.649 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.649 18:38:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 malloc3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 [2024-12-15 18:38:46.030852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:45.649 [2024-12-15 18:38:46.031021] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.649 [2024-12-15 18:38:46.031068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:45.649 [2024-12-15 18:38:46.031112] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.649 [2024-12-15 18:38:46.033616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.649 [2024-12-15 18:38:46.033697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:45.649 pt3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 [2024-12-15 18:38:46.042932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.649 [2024-12-15 18:38:46.045296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.649 [2024-12-15 18:38:46.045366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:45.649 [2024-12-15 18:38:46.045542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:45.649 [2024-12-15 18:38:46.045553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:45.649 [2024-12-15 18:38:46.045918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:07:45.649 [2024-12-15 18:38:46.046092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:45.649 [2024-12-15 18:38:46.046113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:45.649 [2024-12-15 18:38:46.046295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.649 18:38:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.909 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.909 "name": "raid_bdev1", 00:07:45.909 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:45.909 "strip_size_kb": 64, 00:07:45.909 "state": "online", 00:07:45.909 "raid_level": "raid0", 00:07:45.909 "superblock": true, 00:07:45.909 "num_base_bdevs": 3, 00:07:45.909 "num_base_bdevs_discovered": 3, 00:07:45.909 "num_base_bdevs_operational": 3, 00:07:45.909 "base_bdevs_list": [ 00:07:45.909 { 00:07:45.909 "name": "pt1", 00:07:45.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.909 "is_configured": true, 00:07:45.909 "data_offset": 2048, 00:07:45.909 "data_size": 63488 00:07:45.909 }, 00:07:45.909 { 00:07:45.909 "name": "pt2", 00:07:45.909 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.909 "is_configured": true, 00:07:45.909 "data_offset": 2048, 00:07:45.909 "data_size": 63488 00:07:45.909 }, 00:07:45.909 { 00:07:45.909 "name": "pt3", 00:07:45.909 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:45.909 "is_configured": true, 00:07:45.909 "data_offset": 2048, 00:07:45.909 "data_size": 63488 00:07:45.909 } 00:07:45.909 ] 00:07:45.909 }' 00:07:45.909 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.909 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.168 [2024-12-15 18:38:46.498397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.168 "name": "raid_bdev1", 00:07:46.168 "aliases": [ 00:07:46.168 "40e9272d-6b9e-4a48-a3b4-8d124642a435" 00:07:46.168 ], 00:07:46.168 "product_name": "Raid Volume", 00:07:46.168 "block_size": 512, 00:07:46.168 "num_blocks": 190464, 00:07:46.168 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:46.168 "assigned_rate_limits": { 00:07:46.168 "rw_ios_per_sec": 0, 00:07:46.168 "rw_mbytes_per_sec": 0, 00:07:46.168 "r_mbytes_per_sec": 0, 00:07:46.168 "w_mbytes_per_sec": 0 00:07:46.168 }, 00:07:46.168 "claimed": false, 00:07:46.168 "zoned": false, 00:07:46.168 "supported_io_types": { 00:07:46.168 "read": true, 00:07:46.168 "write": true, 00:07:46.168 "unmap": true, 00:07:46.168 "flush": true, 00:07:46.168 "reset": true, 00:07:46.168 "nvme_admin": false, 00:07:46.168 "nvme_io": false, 00:07:46.168 "nvme_io_md": false, 00:07:46.168 "write_zeroes": true, 00:07:46.168 "zcopy": false, 00:07:46.168 "get_zone_info": false, 00:07:46.168 "zone_management": false, 00:07:46.168 "zone_append": false, 00:07:46.168 "compare": 
false, 00:07:46.168 "compare_and_write": false, 00:07:46.168 "abort": false, 00:07:46.168 "seek_hole": false, 00:07:46.168 "seek_data": false, 00:07:46.168 "copy": false, 00:07:46.168 "nvme_iov_md": false 00:07:46.168 }, 00:07:46.168 "memory_domains": [ 00:07:46.168 { 00:07:46.168 "dma_device_id": "system", 00:07:46.168 "dma_device_type": 1 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.168 "dma_device_type": 2 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "dma_device_id": "system", 00:07:46.168 "dma_device_type": 1 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.168 "dma_device_type": 2 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "dma_device_id": "system", 00:07:46.168 "dma_device_type": 1 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.168 "dma_device_type": 2 00:07:46.168 } 00:07:46.168 ], 00:07:46.168 "driver_specific": { 00:07:46.168 "raid": { 00:07:46.168 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:46.168 "strip_size_kb": 64, 00:07:46.168 "state": "online", 00:07:46.168 "raid_level": "raid0", 00:07:46.168 "superblock": true, 00:07:46.168 "num_base_bdevs": 3, 00:07:46.168 "num_base_bdevs_discovered": 3, 00:07:46.168 "num_base_bdevs_operational": 3, 00:07:46.168 "base_bdevs_list": [ 00:07:46.168 { 00:07:46.168 "name": "pt1", 00:07:46.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.168 "is_configured": true, 00:07:46.168 "data_offset": 2048, 00:07:46.168 "data_size": 63488 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "name": "pt2", 00:07:46.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.168 "is_configured": true, 00:07:46.168 "data_offset": 2048, 00:07:46.168 "data_size": 63488 00:07:46.168 }, 00:07:46.168 { 00:07:46.168 "name": "pt3", 00:07:46.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.168 "is_configured": true, 00:07:46.168 "data_offset": 2048, 00:07:46.168 "data_size": 
63488 00:07:46.168 } 00:07:46.168 ] 00:07:46.168 } 00:07:46.168 } 00:07:46.168 }' 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.168 pt2 00:07:46.168 pt3' 00:07:46.168 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.427 [2024-12-15 18:38:46.793904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40e9272d-6b9e-4a48-a3b4-8d124642a435 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40e9272d-6b9e-4a48-a3b4-8d124642a435 ']' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 [2024-12-15 18:38:46.845488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.427 [2024-12-15 18:38:46.845541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.427 [2024-12-15 18:38:46.845668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.427 [2024-12-15 18:38:46.845740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.427 [2024-12-15 18:38:46.845754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.427 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 [2024-12-15 18:38:46.993265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:46.689 [2024-12-15 18:38:46.995609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:46.689 [2024-12-15 18:38:46.995664] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:46.689 [2024-12-15 18:38:46.995726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:46.689 [2024-12-15 18:38:46.995790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:46.689 [2024-12-15 18:38:46.995826] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:46.689 [2024-12-15 18:38:46.995841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.689 [2024-12-15 18:38:46.995854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:46.689 request: 00:07:46.689 { 00:07:46.689 "name": "raid_bdev1", 00:07:46.689 "raid_level": "raid0", 00:07:46.689 "base_bdevs": [ 00:07:46.689 "malloc1", 00:07:46.689 "malloc2", 00:07:46.689 "malloc3" 00:07:46.689 ], 00:07:46.689 "strip_size_kb": 64, 00:07:46.689 "superblock": false, 00:07:46.689 "method": "bdev_raid_create", 00:07:46.689 "req_id": 1 00:07:46.689 } 00:07:46.689 Got JSON-RPC error response 00:07:46.689 response: 00:07:46.689 { 00:07:46.689 "code": -17, 00:07:46.689 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:46.689 } 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 [2024-12-15 18:38:47.057117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.689 [2024-12-15 18:38:47.057298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.689 [2024-12-15 18:38:47.057347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:46.689 [2024-12-15 18:38:47.057384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.689 [2024-12-15 18:38:47.060044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.689 [2024-12-15 18:38:47.060127] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.689 [2024-12-15 18:38:47.060287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:46.689 [2024-12-15 18:38:47.060364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:07:46.689 pt1 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.689 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.689 "name": "raid_bdev1", 00:07:46.689 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:46.689 
"strip_size_kb": 64, 00:07:46.689 "state": "configuring", 00:07:46.689 "raid_level": "raid0", 00:07:46.689 "superblock": true, 00:07:46.689 "num_base_bdevs": 3, 00:07:46.689 "num_base_bdevs_discovered": 1, 00:07:46.689 "num_base_bdevs_operational": 3, 00:07:46.689 "base_bdevs_list": [ 00:07:46.689 { 00:07:46.689 "name": "pt1", 00:07:46.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.689 "is_configured": true, 00:07:46.689 "data_offset": 2048, 00:07:46.689 "data_size": 63488 00:07:46.689 }, 00:07:46.689 { 00:07:46.689 "name": null, 00:07:46.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.689 "is_configured": false, 00:07:46.689 "data_offset": 2048, 00:07:46.689 "data_size": 63488 00:07:46.689 }, 00:07:46.690 { 00:07:46.690 "name": null, 00:07:46.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.690 "is_configured": false, 00:07:46.690 "data_offset": 2048, 00:07:46.690 "data_size": 63488 00:07:46.690 } 00:07:46.690 ] 00:07:46.690 }' 00:07:46.690 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.690 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.259 [2024-12-15 18:38:47.532397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.259 [2024-12-15 18:38:47.532494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.259 [2024-12-15 18:38:47.532518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:07:47.259 [2024-12-15 18:38:47.532533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.259 [2024-12-15 18:38:47.533065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.259 [2024-12-15 18:38:47.533089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.259 [2024-12-15 18:38:47.533180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.259 [2024-12-15 18:38:47.533226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.259 pt2 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.259 [2024-12-15 18:38:47.544386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.259 18:38:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.259 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.260 "name": "raid_bdev1", 00:07:47.260 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:47.260 "strip_size_kb": 64, 00:07:47.260 "state": "configuring", 00:07:47.260 "raid_level": "raid0", 00:07:47.260 "superblock": true, 00:07:47.260 "num_base_bdevs": 3, 00:07:47.260 "num_base_bdevs_discovered": 1, 00:07:47.260 "num_base_bdevs_operational": 3, 00:07:47.260 "base_bdevs_list": [ 00:07:47.260 { 00:07:47.260 "name": "pt1", 00:07:47.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.260 "is_configured": true, 00:07:47.260 "data_offset": 2048, 00:07:47.260 "data_size": 63488 00:07:47.260 }, 00:07:47.260 { 00:07:47.260 "name": null, 00:07:47.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.260 "is_configured": false, 00:07:47.260 "data_offset": 0, 00:07:47.260 "data_size": 63488 00:07:47.260 }, 00:07:47.260 { 00:07:47.260 "name": null, 00:07:47.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.260 
"is_configured": false, 00:07:47.260 "data_offset": 2048, 00:07:47.260 "data_size": 63488 00:07:47.260 } 00:07:47.260 ] 00:07:47.260 }' 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.260 18:38:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.829 [2024-12-15 18:38:48.011652] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.829 [2024-12-15 18:38:48.011869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.829 [2024-12-15 18:38:48.011920] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:07:47.829 [2024-12-15 18:38:48.011986] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.829 [2024-12-15 18:38:48.012525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.829 [2024-12-15 18:38:48.012588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.829 [2024-12-15 18:38:48.012718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.829 [2024-12-15 18:38:48.012772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.829 pt2 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.829 [2024-12-15 18:38:48.023600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:47.829 [2024-12-15 18:38:48.023745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.829 [2024-12-15 18:38:48.023774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:07:47.829 [2024-12-15 18:38:48.023783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.829 [2024-12-15 18:38:48.024287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.829 [2024-12-15 18:38:48.024307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:47.829 [2024-12-15 18:38:48.024403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:47.829 [2024-12-15 18:38:48.024428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:47.829 [2024-12-15 18:38:48.024548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.829 [2024-12-15 18:38:48.024556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:47.829 [2024-12-15 18:38:48.024844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.829 [2024-12-15 18:38:48.024975] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.829 [2024-12-15 18:38:48.024989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:47.829 [2024-12-15 18:38:48.025102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.829 pt3 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.829 "name": "raid_bdev1", 00:07:47.829 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:47.829 "strip_size_kb": 64, 00:07:47.829 "state": "online", 00:07:47.829 "raid_level": "raid0", 00:07:47.829 "superblock": true, 00:07:47.829 "num_base_bdevs": 3, 00:07:47.829 "num_base_bdevs_discovered": 3, 00:07:47.829 "num_base_bdevs_operational": 3, 00:07:47.829 "base_bdevs_list": [ 00:07:47.829 { 00:07:47.829 "name": "pt1", 00:07:47.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.829 "is_configured": true, 00:07:47.829 "data_offset": 2048, 00:07:47.829 "data_size": 63488 00:07:47.829 }, 00:07:47.829 { 00:07:47.829 "name": "pt2", 00:07:47.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.829 "is_configured": true, 00:07:47.829 "data_offset": 2048, 00:07:47.829 "data_size": 63488 00:07:47.829 }, 00:07:47.829 { 00:07:47.829 "name": "pt3", 00:07:47.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.829 "is_configured": true, 00:07:47.829 "data_offset": 2048, 00:07:47.829 "data_size": 63488 00:07:47.829 } 00:07:47.829 ] 00:07:47.829 }' 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.829 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:48.089 18:38:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.089 [2024-12-15 18:38:48.475219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.089 "name": "raid_bdev1", 00:07:48.089 "aliases": [ 00:07:48.089 "40e9272d-6b9e-4a48-a3b4-8d124642a435" 00:07:48.089 ], 00:07:48.089 "product_name": "Raid Volume", 00:07:48.089 "block_size": 512, 00:07:48.089 "num_blocks": 190464, 00:07:48.089 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:48.089 "assigned_rate_limits": { 00:07:48.089 "rw_ios_per_sec": 0, 00:07:48.089 "rw_mbytes_per_sec": 0, 00:07:48.089 "r_mbytes_per_sec": 0, 00:07:48.089 "w_mbytes_per_sec": 0 00:07:48.089 }, 00:07:48.089 "claimed": false, 00:07:48.089 "zoned": false, 00:07:48.089 "supported_io_types": { 00:07:48.089 "read": true, 00:07:48.089 "write": true, 00:07:48.089 "unmap": true, 00:07:48.089 "flush": true, 00:07:48.089 "reset": true, 00:07:48.089 "nvme_admin": false, 00:07:48.089 "nvme_io": false, 00:07:48.089 "nvme_io_md": false, 00:07:48.089 
"write_zeroes": true, 00:07:48.089 "zcopy": false, 00:07:48.089 "get_zone_info": false, 00:07:48.089 "zone_management": false, 00:07:48.089 "zone_append": false, 00:07:48.089 "compare": false, 00:07:48.089 "compare_and_write": false, 00:07:48.089 "abort": false, 00:07:48.089 "seek_hole": false, 00:07:48.089 "seek_data": false, 00:07:48.089 "copy": false, 00:07:48.089 "nvme_iov_md": false 00:07:48.089 }, 00:07:48.089 "memory_domains": [ 00:07:48.089 { 00:07:48.089 "dma_device_id": "system", 00:07:48.089 "dma_device_type": 1 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.089 "dma_device_type": 2 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "dma_device_id": "system", 00:07:48.089 "dma_device_type": 1 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.089 "dma_device_type": 2 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "dma_device_id": "system", 00:07:48.089 "dma_device_type": 1 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.089 "dma_device_type": 2 00:07:48.089 } 00:07:48.089 ], 00:07:48.089 "driver_specific": { 00:07:48.089 "raid": { 00:07:48.089 "uuid": "40e9272d-6b9e-4a48-a3b4-8d124642a435", 00:07:48.089 "strip_size_kb": 64, 00:07:48.089 "state": "online", 00:07:48.089 "raid_level": "raid0", 00:07:48.089 "superblock": true, 00:07:48.089 "num_base_bdevs": 3, 00:07:48.089 "num_base_bdevs_discovered": 3, 00:07:48.089 "num_base_bdevs_operational": 3, 00:07:48.089 "base_bdevs_list": [ 00:07:48.089 { 00:07:48.089 "name": "pt1", 00:07:48.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:48.089 "is_configured": true, 00:07:48.089 "data_offset": 2048, 00:07:48.089 "data_size": 63488 00:07:48.089 }, 00:07:48.089 { 00:07:48.089 "name": "pt2", 00:07:48.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.089 "is_configured": true, 00:07:48.089 "data_offset": 2048, 00:07:48.089 "data_size": 63488 00:07:48.089 }, 00:07:48.089 
{ 00:07:48.089 "name": "pt3", 00:07:48.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:48.089 "is_configured": true, 00:07:48.089 "data_offset": 2048, 00:07:48.089 "data_size": 63488 00:07:48.089 } 00:07:48.089 ] 00:07:48.089 } 00:07:48.089 } 00:07:48.089 }' 00:07:48.089 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:48.348 pt2 00:07:48.348 pt3' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.348 18:38:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.348 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.349 
[2024-12-15 18:38:48.754671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.349 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40e9272d-6b9e-4a48-a3b4-8d124642a435 '!=' 40e9272d-6b9e-4a48-a3b4-8d124642a435 ']' 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78170 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 78170 ']' 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 78170 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78170 00:07:48.608 killing process with pid 78170 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78170' 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 78170 00:07:48.608 [2024-12-15 18:38:48.837577] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.608 18:38:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 78170 00:07:48.609 [2024-12-15 18:38:48.837730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.609 [2024-12-15 18:38:48.837832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.609 [2024-12-15 18:38:48.837843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:48.609 [2024-12-15 18:38:48.901082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.868 18:38:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:48.868 00:07:48.868 real 0m4.240s 00:07:48.868 user 0m6.507s 00:07:48.868 sys 0m1.008s 00:07:48.868 18:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.868 18:38:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.868 ************************************ 00:07:48.868 END TEST raid_superblock_test 00:07:48.868 ************************************ 00:07:48.868 18:38:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:48.868 18:38:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.868 18:38:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.868 18:38:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.868 ************************************ 00:07:48.868 START TEST raid_read_error_test 00:07:48.868 ************************************ 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:49.128 18:38:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8oi3TJB8JG 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78413 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78413 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 78413 ']' 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.128 18:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.128 [2024-12-15 18:38:49.421436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:49.128 [2024-12-15 18:38:49.421692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78413 ] 00:07:49.388 [2024-12-15 18:38:49.598475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.388 [2024-12-15 18:38:49.642231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.388 [2024-12-15 18:38:49.718496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.388 [2024-12-15 18:38:49.718536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.958 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 BaseBdev1_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 true 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 [2024-12-15 18:38:50.276747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.959 [2024-12-15 18:38:50.276857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.959 [2024-12-15 18:38:50.276893] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:49.959 [2024-12-15 18:38:50.276911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.959 [2024-12-15 18:38:50.279616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.959 [2024-12-15 18:38:50.279660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.959 BaseBdev1 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 BaseBdev2_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 true 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 [2024-12-15 18:38:50.323849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:49.959 [2024-12-15 18:38:50.323914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.959 [2024-12-15 18:38:50.323938] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:49.959 [2024-12-15 18:38:50.323946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.959 [2024-12-15 18:38:50.326365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.959 [2024-12-15 18:38:50.326494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:49.959 BaseBdev2 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 BaseBdev3_malloc 00:07:49.959 18:38:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 true 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 [2024-12-15 18:38:50.370821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:49.959 [2024-12-15 18:38:50.370879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.959 [2024-12-15 18:38:50.370901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:49.959 [2024-12-15 18:38:50.370911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.959 [2024-12-15 18:38:50.373148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.959 [2024-12-15 18:38:50.373271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:49.959 BaseBdev3 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.959 [2024-12-15 18:38:50.382874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.959 [2024-12-15 18:38:50.384893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.959 [2024-12-15 18:38:50.384972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:49.959 [2024-12-15 18:38:50.385149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:49.959 [2024-12-15 18:38:50.385163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:49.959 [2024-12-15 18:38:50.385426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:49.959 [2024-12-15 18:38:50.385560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:49.959 [2024-12-15 18:38:50.385569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:07:49.959 [2024-12-15 18:38:50.385711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.959 18:38:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.959 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.219 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.219 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.219 "name": "raid_bdev1", 00:07:50.219 "uuid": "65ec2879-57c8-4c78-9e07-d87468a890f7", 00:07:50.219 "strip_size_kb": 64, 00:07:50.219 "state": "online", 00:07:50.219 "raid_level": "raid0", 00:07:50.219 "superblock": true, 00:07:50.219 "num_base_bdevs": 3, 00:07:50.219 "num_base_bdevs_discovered": 3, 00:07:50.219 "num_base_bdevs_operational": 3, 00:07:50.219 "base_bdevs_list": [ 00:07:50.219 { 00:07:50.219 "name": "BaseBdev1", 00:07:50.219 "uuid": "36c9801f-cb66-5ee9-b716-7d5830e38d18", 00:07:50.219 "is_configured": true, 00:07:50.219 "data_offset": 2048, 00:07:50.219 "data_size": 63488 00:07:50.219 }, 00:07:50.219 { 00:07:50.219 "name": "BaseBdev2", 00:07:50.219 "uuid": "3dbe0581-17c1-5bf3-b32b-28801c70c018", 00:07:50.219 "is_configured": true, 00:07:50.219 "data_offset": 2048, 00:07:50.219 "data_size": 63488 
00:07:50.219 }, 00:07:50.219 { 00:07:50.219 "name": "BaseBdev3", 00:07:50.219 "uuid": "4031bef0-d7a2-5db6-852d-63fdf6f8462a", 00:07:50.219 "is_configured": true, 00:07:50.219 "data_offset": 2048, 00:07:50.219 "data_size": 63488 00:07:50.219 } 00:07:50.219 ] 00:07:50.219 }' 00:07:50.219 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.219 18:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:50.492 18:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:50.492 [2024-12-15 18:38:50.922501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.455 "name": "raid_bdev1", 00:07:51.455 "uuid": "65ec2879-57c8-4c78-9e07-d87468a890f7", 00:07:51.455 "strip_size_kb": 64, 00:07:51.455 "state": "online", 00:07:51.455 "raid_level": "raid0", 00:07:51.455 "superblock": true, 00:07:51.455 "num_base_bdevs": 3, 00:07:51.455 "num_base_bdevs_discovered": 3, 00:07:51.455 "num_base_bdevs_operational": 3, 00:07:51.455 "base_bdevs_list": [ 00:07:51.455 { 00:07:51.455 "name": "BaseBdev1", 00:07:51.455 "uuid": "36c9801f-cb66-5ee9-b716-7d5830e38d18", 00:07:51.455 "is_configured": true, 00:07:51.455 "data_offset": 2048, 00:07:51.455 "data_size": 63488 
00:07:51.455 }, 00:07:51.455 { 00:07:51.455 "name": "BaseBdev2", 00:07:51.455 "uuid": "3dbe0581-17c1-5bf3-b32b-28801c70c018", 00:07:51.455 "is_configured": true, 00:07:51.455 "data_offset": 2048, 00:07:51.455 "data_size": 63488 00:07:51.455 }, 00:07:51.455 { 00:07:51.455 "name": "BaseBdev3", 00:07:51.455 "uuid": "4031bef0-d7a2-5db6-852d-63fdf6f8462a", 00:07:51.455 "is_configured": true, 00:07:51.455 "data_offset": 2048, 00:07:51.455 "data_size": 63488 00:07:51.455 } 00:07:51.455 ] 00:07:51.455 }' 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.455 18:38:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.025 [2024-12-15 18:38:52.255762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.025 [2024-12-15 18:38:52.255853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.025 [2024-12-15 18:38:52.258439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.025 [2024-12-15 18:38:52.258497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.025 [2024-12-15 18:38:52.258537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.025 [2024-12-15 18:38:52.258549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:52.025 { 00:07:52.025 "results": [ 00:07:52.025 { 00:07:52.025 "job": "raid_bdev1", 00:07:52.025 "core_mask": "0x1", 00:07:52.025 "workload": "randrw", 00:07:52.025 "percentage": 50, 
00:07:52.025 "status": "finished", 00:07:52.025 "queue_depth": 1, 00:07:52.025 "io_size": 131072, 00:07:52.025 "runtime": 1.333695, 00:07:52.025 "iops": 13966.461597291735, 00:07:52.025 "mibps": 1745.8076996614668, 00:07:52.025 "io_failed": 1, 00:07:52.025 "io_timeout": 0, 00:07:52.025 "avg_latency_us": 100.57242991486733, 00:07:52.025 "min_latency_us": 25.2646288209607, 00:07:52.025 "max_latency_us": 1473.844541484716 00:07:52.025 } 00:07:52.025 ], 00:07:52.025 "core_count": 1 00:07:52.025 } 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78413 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 78413 ']' 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 78413 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78413 00:07:52.025 killing process with pid 78413 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78413' 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 78413 00:07:52.025 [2024-12-15 18:38:52.307940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.025 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 78413 00:07:52.025 [2024-12-15 
18:38:52.355954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8oi3TJB8JG 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:52.285 00:07:52.285 real 0m3.387s 00:07:52.285 user 0m4.146s 00:07:52.285 sys 0m0.627s 00:07:52.285 ************************************ 00:07:52.285 END TEST raid_read_error_test 00:07:52.285 ************************************ 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.285 18:38:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.545 18:38:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:52.545 18:38:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:52.545 18:38:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.545 18:38:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.545 ************************************ 00:07:52.545 START TEST raid_write_error_test 00:07:52.545 ************************************ 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:07:52.545 18:38:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:52.545 18:38:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MDDsosx6UQ 00:07:52.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78548 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78548 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78548 ']' 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.545 18:38:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.545 [2024-12-15 18:38:52.847633] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:52.545 [2024-12-15 18:38:52.847769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78548 ] 00:07:52.804 [2024-12-15 18:38:52.998227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.804 [2024-12-15 18:38:53.042138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.804 [2024-12-15 18:38:53.118125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.804 [2024-12-15 18:38:53.118165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.373 BaseBdev1_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.373 true 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.373 [2024-12-15 18:38:53.763977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:53.373 [2024-12-15 18:38:53.764158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.373 [2024-12-15 18:38:53.764209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:53.373 [2024-12-15 18:38:53.764238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.373 [2024-12-15 18:38:53.766939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.373 [2024-12-15 18:38:53.767011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:53.373 BaseBdev1 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.373 BaseBdev2_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.373 true 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.373 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.373 [2024-12-15 18:38:53.811540] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:53.373 [2024-12-15 18:38:53.811613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.373 [2024-12-15 18:38:53.811640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:53.373 [2024-12-15 18:38:53.811648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.633 [2024-12-15 18:38:53.814129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.633 [2024-12-15 18:38:53.814164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:53.633 BaseBdev2 00:07:53.633 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.633 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:53.633 18:38:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:53.633 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.633 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.633 BaseBdev3_malloc 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.634 true 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.634 [2024-12-15 18:38:53.858769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:53.634 [2024-12-15 18:38:53.858923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.634 [2024-12-15 18:38:53.858957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:53.634 [2024-12-15 18:38:53.858967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.634 [2024-12-15 18:38:53.861465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.634 [2024-12-15 18:38:53.861556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:53.634 BaseBdev3 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.634 [2024-12-15 18:38:53.870835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.634 [2024-12-15 18:38:53.872970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:53.634 [2024-12-15 18:38:53.873054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:53.634 [2024-12-15 18:38:53.873238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:53.634 [2024-12-15 18:38:53.873253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:53.634 [2024-12-15 18:38:53.873536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.634 [2024-12-15 18:38:53.873706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:53.634 [2024-12-15 18:38:53.873717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:07:53.634 [2024-12-15 18:38:53.873871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.634 "name": "raid_bdev1", 00:07:53.634 "uuid": "9327f789-1222-48a0-8235-4e5acba0e03c", 00:07:53.634 "strip_size_kb": 64, 00:07:53.634 "state": "online", 00:07:53.634 "raid_level": "raid0", 00:07:53.634 "superblock": true, 00:07:53.634 "num_base_bdevs": 3, 00:07:53.634 "num_base_bdevs_discovered": 3, 00:07:53.634 "num_base_bdevs_operational": 3, 00:07:53.634 "base_bdevs_list": [ 00:07:53.634 { 00:07:53.634 "name": "BaseBdev1", 
00:07:53.634 "uuid": "f4eed9cc-a7ba-519f-bc74-4da17e09a444", 00:07:53.634 "is_configured": true, 00:07:53.634 "data_offset": 2048, 00:07:53.634 "data_size": 63488 00:07:53.634 }, 00:07:53.634 { 00:07:53.634 "name": "BaseBdev2", 00:07:53.634 "uuid": "da3186f0-59f0-5672-ba7e-d9ed49e0eacb", 00:07:53.634 "is_configured": true, 00:07:53.634 "data_offset": 2048, 00:07:53.634 "data_size": 63488 00:07:53.634 }, 00:07:53.634 { 00:07:53.634 "name": "BaseBdev3", 00:07:53.634 "uuid": "e1469fba-0823-5636-857f-a98ad46497b6", 00:07:53.634 "is_configured": true, 00:07:53.634 "data_offset": 2048, 00:07:53.634 "data_size": 63488 00:07:53.634 } 00:07:53.634 ] 00:07:53.634 }' 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.634 18:38:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.893 18:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:53.893 18:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.153 [2024-12-15 18:38:54.414398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.091 "name": "raid_bdev1", 00:07:55.091 "uuid": "9327f789-1222-48a0-8235-4e5acba0e03c", 00:07:55.091 "strip_size_kb": 64, 00:07:55.091 "state": "online", 00:07:55.091 
"raid_level": "raid0", 00:07:55.091 "superblock": true, 00:07:55.091 "num_base_bdevs": 3, 00:07:55.091 "num_base_bdevs_discovered": 3, 00:07:55.091 "num_base_bdevs_operational": 3, 00:07:55.091 "base_bdevs_list": [ 00:07:55.091 { 00:07:55.091 "name": "BaseBdev1", 00:07:55.091 "uuid": "f4eed9cc-a7ba-519f-bc74-4da17e09a444", 00:07:55.091 "is_configured": true, 00:07:55.091 "data_offset": 2048, 00:07:55.091 "data_size": 63488 00:07:55.091 }, 00:07:55.091 { 00:07:55.091 "name": "BaseBdev2", 00:07:55.091 "uuid": "da3186f0-59f0-5672-ba7e-d9ed49e0eacb", 00:07:55.091 "is_configured": true, 00:07:55.091 "data_offset": 2048, 00:07:55.091 "data_size": 63488 00:07:55.091 }, 00:07:55.091 { 00:07:55.091 "name": "BaseBdev3", 00:07:55.091 "uuid": "e1469fba-0823-5636-857f-a98ad46497b6", 00:07:55.091 "is_configured": true, 00:07:55.091 "data_offset": 2048, 00:07:55.091 "data_size": 63488 00:07:55.091 } 00:07:55.091 ] 00:07:55.091 }' 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.091 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.660 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:55.660 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.660 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.660 [2024-12-15 18:38:55.796266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:55.660 [2024-12-15 18:38:55.796324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.660 [2024-12-15 18:38:55.798822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.660 [2024-12-15 18:38:55.798880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.660 [2024-12-15 18:38:55.798923] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.660 [2024-12-15 18:38:55.798941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:55.660 { 00:07:55.660 "results": [ 00:07:55.660 { 00:07:55.660 "job": "raid_bdev1", 00:07:55.660 "core_mask": "0x1", 00:07:55.660 "workload": "randrw", 00:07:55.660 "percentage": 50, 00:07:55.660 "status": "finished", 00:07:55.660 "queue_depth": 1, 00:07:55.660 "io_size": 131072, 00:07:55.660 "runtime": 1.382091, 00:07:55.660 "iops": 13955.665726786441, 00:07:55.660 "mibps": 1744.4582158483051, 00:07:55.660 "io_failed": 1, 00:07:55.660 "io_timeout": 0, 00:07:55.661 "avg_latency_us": 100.32468255206206, 00:07:55.661 "min_latency_us": 25.6, 00:07:55.661 "max_latency_us": 1423.7624454148472 00:07:55.661 } 00:07:55.661 ], 00:07:55.661 "core_count": 1 00:07:55.661 } 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78548 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78548 ']' 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78548 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78548 00:07:55.661 killing process with pid 78548 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.661 18:38:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78548' 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78548 00:07:55.661 [2024-12-15 18:38:55.834271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.661 18:38:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78548 00:07:55.661 [2024-12-15 18:38:55.883054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MDDsosx6UQ 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.921 ************************************ 00:07:55.921 END TEST raid_write_error_test 00:07:55.921 ************************************ 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:55.921 00:07:55.921 real 0m3.452s 00:07:55.921 user 0m4.297s 00:07:55.921 sys 0m0.600s 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.921 18:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.921 18:38:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:55.921 18:38:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:07:55.921 18:38:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.921 18:38:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.921 18:38:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.921 ************************************ 00:07:55.921 START TEST raid_state_function_test 00:07:55.921 ************************************ 00:07:55.921 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:07:55.921 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:55.921 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:55.922 18:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78680 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78680' 00:07:55.922 Process raid pid: 78680 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78680 00:07:55.922 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78680 ']' 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.922 18:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.182 [2024-12-15 18:38:56.392084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:56.182 [2024-12-15 18:38:56.392232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.182 [2024-12-15 18:38:56.567067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.182 [2024-12-15 18:38:56.609421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.442 [2024-12-15 18:38:56.687478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.442 [2024-12-15 18:38:56.687522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 
-r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.011 [2024-12-15 18:38:57.245516] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.011 [2024-12-15 18:38:57.245585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.011 [2024-12-15 18:38:57.245595] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.011 [2024-12-15 18:38:57.245605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.011 [2024-12-15 18:38:57.245611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.011 [2024-12-15 18:38:57.245623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.011 "name": "Existed_Raid", 00:07:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.011 "strip_size_kb": 64, 00:07:57.011 "state": "configuring", 00:07:57.011 "raid_level": "concat", 00:07:57.011 "superblock": false, 00:07:57.011 "num_base_bdevs": 3, 00:07:57.011 "num_base_bdevs_discovered": 0, 00:07:57.011 "num_base_bdevs_operational": 3, 00:07:57.011 "base_bdevs_list": [ 00:07:57.011 { 00:07:57.011 "name": "BaseBdev1", 00:07:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.011 "is_configured": false, 00:07:57.011 "data_offset": 0, 00:07:57.011 "data_size": 0 00:07:57.011 }, 00:07:57.011 { 00:07:57.011 "name": "BaseBdev2", 00:07:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.011 "is_configured": false, 00:07:57.011 "data_offset": 0, 00:07:57.011 "data_size": 0 00:07:57.011 }, 00:07:57.011 { 00:07:57.011 "name": "BaseBdev3", 00:07:57.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.011 "is_configured": 
false, 00:07:57.011 "data_offset": 0, 00:07:57.011 "data_size": 0 00:07:57.011 } 00:07:57.011 ] 00:07:57.011 }' 00:07:57.011 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.012 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 [2024-12-15 18:38:57.656807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.271 [2024-12-15 18:38:57.656893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 [2024-12-15 18:38:57.668751] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.271 [2024-12-15 18:38:57.668816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.271 [2024-12-15 18:38:57.668825] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.271 [2024-12-15 18:38:57.668836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.271 [2024-12-15 18:38:57.668842] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.271 [2024-12-15 18:38:57.668851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.271 [2024-12-15 18:38:57.696243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.271 BaseBdev1 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.271 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.272 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.532 [ 00:07:57.532 { 00:07:57.532 "name": "BaseBdev1", 00:07:57.532 "aliases": [ 00:07:57.532 "c72a36ec-42f1-4d0f-a4c8-ab80117b9364" 00:07:57.532 ], 00:07:57.532 "product_name": "Malloc disk", 00:07:57.532 "block_size": 512, 00:07:57.532 "num_blocks": 65536, 00:07:57.532 "uuid": "c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:57.532 "assigned_rate_limits": { 00:07:57.532 "rw_ios_per_sec": 0, 00:07:57.532 "rw_mbytes_per_sec": 0, 00:07:57.532 "r_mbytes_per_sec": 0, 00:07:57.532 "w_mbytes_per_sec": 0 00:07:57.532 }, 00:07:57.532 "claimed": true, 00:07:57.532 "claim_type": "exclusive_write", 00:07:57.532 "zoned": false, 00:07:57.532 "supported_io_types": { 00:07:57.532 "read": true, 00:07:57.532 "write": true, 00:07:57.532 "unmap": true, 00:07:57.532 "flush": true, 00:07:57.532 "reset": true, 00:07:57.532 "nvme_admin": false, 00:07:57.532 "nvme_io": false, 00:07:57.532 "nvme_io_md": false, 00:07:57.532 "write_zeroes": true, 00:07:57.532 "zcopy": true, 00:07:57.532 "get_zone_info": false, 00:07:57.532 "zone_management": false, 00:07:57.532 "zone_append": false, 00:07:57.532 "compare": false, 00:07:57.532 "compare_and_write": false, 00:07:57.532 "abort": true, 00:07:57.532 "seek_hole": false, 00:07:57.532 "seek_data": false, 00:07:57.532 "copy": true, 00:07:57.532 "nvme_iov_md": false 00:07:57.532 }, 00:07:57.532 "memory_domains": [ 00:07:57.532 { 00:07:57.532 "dma_device_id": "system", 00:07:57.532 "dma_device_type": 1 00:07:57.532 }, 00:07:57.532 { 00:07:57.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.532 "dma_device_type": 2 00:07:57.532 } 00:07:57.532 ], 
00:07:57.532 "driver_specific": {} 00:07:57.532 } 00:07:57.532 ] 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.532 "name": "Existed_Raid", 00:07:57.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.532 "strip_size_kb": 64, 00:07:57.532 "state": "configuring", 00:07:57.532 "raid_level": "concat", 00:07:57.532 "superblock": false, 00:07:57.532 "num_base_bdevs": 3, 00:07:57.532 "num_base_bdevs_discovered": 1, 00:07:57.532 "num_base_bdevs_operational": 3, 00:07:57.532 "base_bdevs_list": [ 00:07:57.532 { 00:07:57.532 "name": "BaseBdev1", 00:07:57.532 "uuid": "c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:57.532 "is_configured": true, 00:07:57.532 "data_offset": 0, 00:07:57.532 "data_size": 65536 00:07:57.532 }, 00:07:57.532 { 00:07:57.532 "name": "BaseBdev2", 00:07:57.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.532 "is_configured": false, 00:07:57.532 "data_offset": 0, 00:07:57.532 "data_size": 0 00:07:57.532 }, 00:07:57.532 { 00:07:57.532 "name": "BaseBdev3", 00:07:57.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.532 "is_configured": false, 00:07:57.532 "data_offset": 0, 00:07:57.532 "data_size": 0 00:07:57.532 } 00:07:57.532 ] 00:07:57.532 }' 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.532 18:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 [2024-12-15 18:38:58.163599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.799 [2024-12-15 18:38:58.163687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
Existed_Raid, state configuring 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 [2024-12-15 18:38:58.175570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.799 [2024-12-15 18:38:58.177898] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.799 [2024-12-15 18:38:58.177944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.799 [2024-12-15 18:38:58.177954] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:57.799 [2024-12-15 18:38:58.177966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.799 "name": "Existed_Raid", 00:07:57.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.799 "strip_size_kb": 64, 00:07:57.799 "state": "configuring", 00:07:57.799 "raid_level": "concat", 00:07:57.799 "superblock": false, 00:07:57.799 "num_base_bdevs": 3, 00:07:57.799 "num_base_bdevs_discovered": 1, 00:07:57.799 "num_base_bdevs_operational": 3, 00:07:57.799 "base_bdevs_list": [ 00:07:57.799 { 00:07:57.799 "name": "BaseBdev1", 00:07:57.799 "uuid": "c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:57.799 "is_configured": true, 00:07:57.799 "data_offset": 0, 00:07:57.799 "data_size": 65536 00:07:57.799 }, 00:07:57.799 { 
00:07:57.799 "name": "BaseBdev2", 00:07:57.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.799 "is_configured": false, 00:07:57.799 "data_offset": 0, 00:07:57.799 "data_size": 0 00:07:57.799 }, 00:07:57.799 { 00:07:57.799 "name": "BaseBdev3", 00:07:57.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.799 "is_configured": false, 00:07:57.799 "data_offset": 0, 00:07:57.799 "data_size": 0 00:07:57.799 } 00:07:57.799 ] 00:07:57.799 }' 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.799 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.386 [2024-12-15 18:38:58.623505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.386 BaseBdev2 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.386 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 [ 00:07:58.387 { 00:07:58.387 "name": "BaseBdev2", 00:07:58.387 "aliases": [ 00:07:58.387 "ca74965f-5310-46bd-9315-72832572f42f" 00:07:58.387 ], 00:07:58.387 "product_name": "Malloc disk", 00:07:58.387 "block_size": 512, 00:07:58.387 "num_blocks": 65536, 00:07:58.387 "uuid": "ca74965f-5310-46bd-9315-72832572f42f", 00:07:58.387 "assigned_rate_limits": { 00:07:58.387 "rw_ios_per_sec": 0, 00:07:58.387 "rw_mbytes_per_sec": 0, 00:07:58.387 "r_mbytes_per_sec": 0, 00:07:58.387 "w_mbytes_per_sec": 0 00:07:58.387 }, 00:07:58.387 "claimed": true, 00:07:58.387 "claim_type": "exclusive_write", 00:07:58.387 "zoned": false, 00:07:58.387 "supported_io_types": { 00:07:58.387 "read": true, 00:07:58.387 "write": true, 00:07:58.387 "unmap": true, 00:07:58.387 "flush": true, 00:07:58.387 "reset": true, 00:07:58.387 "nvme_admin": false, 00:07:58.387 "nvme_io": false, 00:07:58.387 "nvme_io_md": false, 00:07:58.387 "write_zeroes": true, 00:07:58.387 "zcopy": true, 00:07:58.387 "get_zone_info": false, 00:07:58.387 "zone_management": false, 00:07:58.387 "zone_append": false, 00:07:58.387 "compare": false, 00:07:58.387 "compare_and_write": false, 00:07:58.387 "abort": true, 00:07:58.387 "seek_hole": false, 00:07:58.387 "seek_data": false, 00:07:58.387 
"copy": true, 00:07:58.387 "nvme_iov_md": false 00:07:58.387 }, 00:07:58.387 "memory_domains": [ 00:07:58.387 { 00:07:58.387 "dma_device_id": "system", 00:07:58.387 "dma_device_type": 1 00:07:58.387 }, 00:07:58.387 { 00:07:58.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.387 "dma_device_type": 2 00:07:58.387 } 00:07:58.387 ], 00:07:58.387 "driver_specific": {} 00:07:58.387 } 00:07:58.387 ] 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.387 
18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.387 "name": "Existed_Raid", 00:07:58.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.387 "strip_size_kb": 64, 00:07:58.387 "state": "configuring", 00:07:58.387 "raid_level": "concat", 00:07:58.387 "superblock": false, 00:07:58.387 "num_base_bdevs": 3, 00:07:58.387 "num_base_bdevs_discovered": 2, 00:07:58.387 "num_base_bdevs_operational": 3, 00:07:58.387 "base_bdevs_list": [ 00:07:58.387 { 00:07:58.387 "name": "BaseBdev1", 00:07:58.387 "uuid": "c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:58.387 "is_configured": true, 00:07:58.387 "data_offset": 0, 00:07:58.387 "data_size": 65536 00:07:58.387 }, 00:07:58.387 { 00:07:58.387 "name": "BaseBdev2", 00:07:58.387 "uuid": "ca74965f-5310-46bd-9315-72832572f42f", 00:07:58.387 "is_configured": true, 00:07:58.387 "data_offset": 0, 00:07:58.387 "data_size": 65536 00:07:58.387 }, 00:07:58.387 { 00:07:58.387 "name": "BaseBdev3", 00:07:58.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.387 "is_configured": false, 00:07:58.387 "data_offset": 0, 00:07:58.387 "data_size": 0 00:07:58.387 } 00:07:58.387 ] 00:07:58.387 }' 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.387 18:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.957 18:38:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.957 [2024-12-15 18:38:59.179173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.957 [2024-12-15 18:38:59.179248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:58.957 [2024-12-15 18:38:59.179269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:58.957 [2024-12-15 18:38:59.179643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:58.957 [2024-12-15 18:38:59.179879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:58.957 [2024-12-15 18:38:59.179901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:58.957 [2024-12-15 18:38:59.180155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.957 BaseBdev3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.957 [ 00:07:58.957 { 00:07:58.957 "name": "BaseBdev3", 00:07:58.957 "aliases": [ 00:07:58.957 "4c22fb03-dfdf-4a4c-aa5d-8b5d34dbc1af" 00:07:58.957 ], 00:07:58.957 "product_name": "Malloc disk", 00:07:58.957 "block_size": 512, 00:07:58.957 "num_blocks": 65536, 00:07:58.957 "uuid": "4c22fb03-dfdf-4a4c-aa5d-8b5d34dbc1af", 00:07:58.957 "assigned_rate_limits": { 00:07:58.957 "rw_ios_per_sec": 0, 00:07:58.957 "rw_mbytes_per_sec": 0, 00:07:58.957 "r_mbytes_per_sec": 0, 00:07:58.957 "w_mbytes_per_sec": 0 00:07:58.957 }, 00:07:58.957 "claimed": true, 00:07:58.957 "claim_type": "exclusive_write", 00:07:58.957 "zoned": false, 00:07:58.957 "supported_io_types": { 00:07:58.957 "read": true, 00:07:58.957 "write": true, 00:07:58.957 "unmap": true, 00:07:58.957 "flush": true, 00:07:58.957 "reset": true, 00:07:58.957 "nvme_admin": false, 00:07:58.957 "nvme_io": false, 00:07:58.957 "nvme_io_md": false, 00:07:58.957 "write_zeroes": true, 00:07:58.957 "zcopy": true, 00:07:58.957 "get_zone_info": false, 00:07:58.957 "zone_management": false, 00:07:58.957 "zone_append": false, 00:07:58.957 "compare": false, 00:07:58.957 "compare_and_write": false, 
00:07:58.957 "abort": true, 00:07:58.957 "seek_hole": false, 00:07:58.957 "seek_data": false, 00:07:58.957 "copy": true, 00:07:58.957 "nvme_iov_md": false 00:07:58.957 }, 00:07:58.957 "memory_domains": [ 00:07:58.957 { 00:07:58.957 "dma_device_id": "system", 00:07:58.957 "dma_device_type": 1 00:07:58.957 }, 00:07:58.957 { 00:07:58.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.957 "dma_device_type": 2 00:07:58.957 } 00:07:58.957 ], 00:07:58.957 "driver_specific": {} 00:07:58.957 } 00:07:58.957 ] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.957 
18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.957 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.957 "name": "Existed_Raid", 00:07:58.957 "uuid": "513dcaaf-3992-42f7-a081-89ae4362c90f", 00:07:58.957 "strip_size_kb": 64, 00:07:58.957 "state": "online", 00:07:58.957 "raid_level": "concat", 00:07:58.957 "superblock": false, 00:07:58.957 "num_base_bdevs": 3, 00:07:58.957 "num_base_bdevs_discovered": 3, 00:07:58.957 "num_base_bdevs_operational": 3, 00:07:58.957 "base_bdevs_list": [ 00:07:58.957 { 00:07:58.957 "name": "BaseBdev1", 00:07:58.957 "uuid": "c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:58.957 "is_configured": true, 00:07:58.957 "data_offset": 0, 00:07:58.957 "data_size": 65536 00:07:58.957 }, 00:07:58.957 { 00:07:58.957 "name": "BaseBdev2", 00:07:58.957 "uuid": "ca74965f-5310-46bd-9315-72832572f42f", 00:07:58.957 "is_configured": true, 00:07:58.957 "data_offset": 0, 00:07:58.957 "data_size": 65536 00:07:58.958 }, 00:07:58.958 { 00:07:58.958 "name": "BaseBdev3", 00:07:58.958 "uuid": "4c22fb03-dfdf-4a4c-aa5d-8b5d34dbc1af", 00:07:58.958 "is_configured": true, 00:07:58.958 "data_offset": 0, 00:07:58.958 "data_size": 65536 00:07:58.958 } 00:07:58.958 ] 00:07:58.958 }' 00:07:58.958 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.958 18:38:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.528 [2024-12-15 18:38:59.674755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.528 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.528 "name": "Existed_Raid", 00:07:59.528 "aliases": [ 00:07:59.528 "513dcaaf-3992-42f7-a081-89ae4362c90f" 00:07:59.528 ], 00:07:59.528 "product_name": "Raid Volume", 00:07:59.528 "block_size": 512, 00:07:59.528 "num_blocks": 196608, 00:07:59.528 "uuid": "513dcaaf-3992-42f7-a081-89ae4362c90f", 00:07:59.528 "assigned_rate_limits": { 00:07:59.528 "rw_ios_per_sec": 0, 00:07:59.528 "rw_mbytes_per_sec": 0, 00:07:59.528 "r_mbytes_per_sec": 0, 00:07:59.528 
"w_mbytes_per_sec": 0 00:07:59.528 }, 00:07:59.528 "claimed": false, 00:07:59.528 "zoned": false, 00:07:59.528 "supported_io_types": { 00:07:59.528 "read": true, 00:07:59.528 "write": true, 00:07:59.528 "unmap": true, 00:07:59.528 "flush": true, 00:07:59.528 "reset": true, 00:07:59.528 "nvme_admin": false, 00:07:59.529 "nvme_io": false, 00:07:59.529 "nvme_io_md": false, 00:07:59.529 "write_zeroes": true, 00:07:59.529 "zcopy": false, 00:07:59.529 "get_zone_info": false, 00:07:59.529 "zone_management": false, 00:07:59.529 "zone_append": false, 00:07:59.529 "compare": false, 00:07:59.529 "compare_and_write": false, 00:07:59.529 "abort": false, 00:07:59.529 "seek_hole": false, 00:07:59.529 "seek_data": false, 00:07:59.529 "copy": false, 00:07:59.529 "nvme_iov_md": false 00:07:59.529 }, 00:07:59.529 "memory_domains": [ 00:07:59.529 { 00:07:59.529 "dma_device_id": "system", 00:07:59.529 "dma_device_type": 1 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.529 "dma_device_type": 2 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "dma_device_id": "system", 00:07:59.529 "dma_device_type": 1 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.529 "dma_device_type": 2 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "dma_device_id": "system", 00:07:59.529 "dma_device_type": 1 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.529 "dma_device_type": 2 00:07:59.529 } 00:07:59.529 ], 00:07:59.529 "driver_specific": { 00:07:59.529 "raid": { 00:07:59.529 "uuid": "513dcaaf-3992-42f7-a081-89ae4362c90f", 00:07:59.529 "strip_size_kb": 64, 00:07:59.529 "state": "online", 00:07:59.529 "raid_level": "concat", 00:07:59.529 "superblock": false, 00:07:59.529 "num_base_bdevs": 3, 00:07:59.529 "num_base_bdevs_discovered": 3, 00:07:59.529 "num_base_bdevs_operational": 3, 00:07:59.529 "base_bdevs_list": [ 00:07:59.529 { 00:07:59.529 "name": "BaseBdev1", 00:07:59.529 "uuid": 
"c72a36ec-42f1-4d0f-a4c8-ab80117b9364", 00:07:59.529 "is_configured": true, 00:07:59.529 "data_offset": 0, 00:07:59.529 "data_size": 65536 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "name": "BaseBdev2", 00:07:59.529 "uuid": "ca74965f-5310-46bd-9315-72832572f42f", 00:07:59.529 "is_configured": true, 00:07:59.529 "data_offset": 0, 00:07:59.529 "data_size": 65536 00:07:59.529 }, 00:07:59.529 { 00:07:59.529 "name": "BaseBdev3", 00:07:59.529 "uuid": "4c22fb03-dfdf-4a4c-aa5d-8b5d34dbc1af", 00:07:59.529 "is_configured": true, 00:07:59.529 "data_offset": 0, 00:07:59.529 "data_size": 65536 00:07:59.529 } 00:07:59.529 ] 00:07:59.529 } 00:07:59.529 } 00:07:59.529 }' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.529 BaseBdev2 00:07:59.529 BaseBdev3' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.529 
18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.529 
18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.529 [2024-12-15 18:38:59.938041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.529 [2024-12-15 18:38:59.938085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.529 [2024-12-15 18:38:59.938156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.529 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.789 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.789 18:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.789 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.789 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.789 18:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.789 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.789 "name": "Existed_Raid", 00:07:59.789 "uuid": "513dcaaf-3992-42f7-a081-89ae4362c90f", 00:07:59.789 "strip_size_kb": 64, 00:07:59.789 "state": "offline", 00:07:59.789 "raid_level": "concat", 00:07:59.789 "superblock": false, 00:07:59.789 "num_base_bdevs": 3, 00:07:59.789 "num_base_bdevs_discovered": 2, 00:07:59.789 "num_base_bdevs_operational": 2, 00:07:59.789 "base_bdevs_list": [ 00:07:59.789 { 00:07:59.789 "name": null, 00:07:59.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.789 "is_configured": false, 00:07:59.789 "data_offset": 0, 00:07:59.789 "data_size": 65536 00:07:59.789 }, 00:07:59.789 { 00:07:59.789 "name": "BaseBdev2", 00:07:59.789 "uuid": "ca74965f-5310-46bd-9315-72832572f42f", 00:07:59.789 
"is_configured": true, 00:07:59.789 "data_offset": 0, 00:07:59.789 "data_size": 65536 00:07:59.789 }, 00:07:59.789 { 00:07:59.789 "name": "BaseBdev3", 00:07:59.789 "uuid": "4c22fb03-dfdf-4a4c-aa5d-8b5d34dbc1af", 00:07:59.789 "is_configured": true, 00:07:59.789 "data_offset": 0, 00:07:59.789 "data_size": 65536 00:07:59.789 } 00:07:59.789 ] 00:07:59.789 }' 00:07:59.789 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.789 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.049 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.049 [2024-12-15 18:39:00.474115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 [2024-12-15 18:39:00.566638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:00.309 [2024-12-15 18:39:00.566708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 BaseBdev2 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.309 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.309 [ 00:08:00.309 { 00:08:00.309 "name": "BaseBdev2", 00:08:00.309 "aliases": [ 00:08:00.309 "de8c19e5-36a7-4e6e-aa35-306b3d081711" 00:08:00.309 ], 00:08:00.309 "product_name": "Malloc disk", 00:08:00.309 "block_size": 512, 00:08:00.309 "num_blocks": 65536, 00:08:00.309 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:00.309 "assigned_rate_limits": { 00:08:00.309 "rw_ios_per_sec": 0, 00:08:00.309 "rw_mbytes_per_sec": 0, 00:08:00.309 "r_mbytes_per_sec": 0, 00:08:00.309 "w_mbytes_per_sec": 0 00:08:00.309 }, 00:08:00.309 "claimed": false, 00:08:00.309 "zoned": false, 00:08:00.309 "supported_io_types": { 00:08:00.309 "read": true, 00:08:00.309 "write": true, 00:08:00.309 "unmap": true, 00:08:00.309 "flush": true, 00:08:00.309 "reset": true, 00:08:00.309 "nvme_admin": false, 00:08:00.309 "nvme_io": false, 00:08:00.309 "nvme_io_md": false, 00:08:00.309 "write_zeroes": true, 00:08:00.309 "zcopy": true, 00:08:00.309 "get_zone_info": false, 
00:08:00.309 "zone_management": false, 00:08:00.309 "zone_append": false, 00:08:00.309 "compare": false, 00:08:00.309 "compare_and_write": false, 00:08:00.309 "abort": true, 00:08:00.309 "seek_hole": false, 00:08:00.309 "seek_data": false, 00:08:00.309 "copy": true, 00:08:00.309 "nvme_iov_md": false 00:08:00.309 }, 00:08:00.309 "memory_domains": [ 00:08:00.309 { 00:08:00.309 "dma_device_id": "system", 00:08:00.309 "dma_device_type": 1 00:08:00.309 }, 00:08:00.309 { 00:08:00.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.310 "dma_device_type": 2 00:08:00.310 } 00:08:00.310 ], 00:08:00.310 "driver_specific": {} 00:08:00.310 } 00:08:00.310 ] 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 BaseBdev3 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.310 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 [ 00:08:00.310 { 00:08:00.310 "name": "BaseBdev3", 00:08:00.310 "aliases": [ 00:08:00.310 "c2c27387-e120-4c2d-8a84-610de490b7c1" 00:08:00.310 ], 00:08:00.310 "product_name": "Malloc disk", 00:08:00.310 "block_size": 512, 00:08:00.310 "num_blocks": 65536, 00:08:00.310 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:00.310 "assigned_rate_limits": { 00:08:00.310 "rw_ios_per_sec": 0, 00:08:00.310 "rw_mbytes_per_sec": 0, 00:08:00.310 "r_mbytes_per_sec": 0, 00:08:00.310 "w_mbytes_per_sec": 0 00:08:00.310 }, 00:08:00.310 "claimed": false, 00:08:00.310 "zoned": false, 00:08:00.310 "supported_io_types": { 00:08:00.310 "read": true, 00:08:00.310 "write": true, 00:08:00.310 "unmap": true, 00:08:00.310 "flush": true, 00:08:00.310 "reset": true, 00:08:00.310 "nvme_admin": false, 00:08:00.310 "nvme_io": false, 00:08:00.310 "nvme_io_md": false, 00:08:00.310 "write_zeroes": true, 00:08:00.310 "zcopy": true, 00:08:00.310 "get_zone_info": false, 00:08:00.310 
"zone_management": false, 00:08:00.310 "zone_append": false, 00:08:00.310 "compare": false, 00:08:00.310 "compare_and_write": false, 00:08:00.310 "abort": true, 00:08:00.310 "seek_hole": false, 00:08:00.310 "seek_data": false, 00:08:00.310 "copy": true, 00:08:00.310 "nvme_iov_md": false 00:08:00.310 }, 00:08:00.310 "memory_domains": [ 00:08:00.570 { 00:08:00.570 "dma_device_id": "system", 00:08:00.570 "dma_device_type": 1 00:08:00.570 }, 00:08:00.570 { 00:08:00.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.570 "dma_device_type": 2 00:08:00.570 } 00:08:00.570 ], 00:08:00.570 "driver_specific": {} 00:08:00.570 } 00:08:00.570 ] 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.570 [2024-12-15 18:39:00.759846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.570 [2024-12-15 18:39:00.759912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.570 [2024-12-15 18:39:00.759938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.570 [2024-12-15 18:39:00.762149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.570 18:39:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.570 "name": "Existed_Raid", 00:08:00.570 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:00.570 "strip_size_kb": 64, 00:08:00.570 "state": "configuring", 00:08:00.570 "raid_level": "concat", 00:08:00.570 "superblock": false, 00:08:00.570 "num_base_bdevs": 3, 00:08:00.570 "num_base_bdevs_discovered": 2, 00:08:00.570 "num_base_bdevs_operational": 3, 00:08:00.570 "base_bdevs_list": [ 00:08:00.570 { 00:08:00.570 "name": "BaseBdev1", 00:08:00.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.570 "is_configured": false, 00:08:00.570 "data_offset": 0, 00:08:00.570 "data_size": 0 00:08:00.570 }, 00:08:00.570 { 00:08:00.570 "name": "BaseBdev2", 00:08:00.570 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:00.570 "is_configured": true, 00:08:00.570 "data_offset": 0, 00:08:00.570 "data_size": 65536 00:08:00.570 }, 00:08:00.570 { 00:08:00.570 "name": "BaseBdev3", 00:08:00.570 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:00.570 "is_configured": true, 00:08:00.570 "data_offset": 0, 00:08:00.570 "data_size": 65536 00:08:00.570 } 00:08:00.570 ] 00:08:00.570 }' 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.570 18:39:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.830 [2024-12-15 18:39:01.115300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.830 18:39:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.830 "name": "Existed_Raid", 00:08:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.830 "strip_size_kb": 64, 00:08:00.830 "state": "configuring", 00:08:00.830 "raid_level": "concat", 00:08:00.830 "superblock": false, 00:08:00.830 "num_base_bdevs": 3, 00:08:00.830 "num_base_bdevs_discovered": 1, 00:08:00.830 
"num_base_bdevs_operational": 3, 00:08:00.830 "base_bdevs_list": [ 00:08:00.830 { 00:08:00.830 "name": "BaseBdev1", 00:08:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.830 "is_configured": false, 00:08:00.830 "data_offset": 0, 00:08:00.830 "data_size": 0 00:08:00.830 }, 00:08:00.830 { 00:08:00.830 "name": null, 00:08:00.830 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:00.830 "is_configured": false, 00:08:00.830 "data_offset": 0, 00:08:00.830 "data_size": 65536 00:08:00.830 }, 00:08:00.830 { 00:08:00.830 "name": "BaseBdev3", 00:08:00.830 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:00.830 "is_configured": true, 00:08:00.830 "data_offset": 0, 00:08:00.830 "data_size": 65536 00:08:00.830 } 00:08:00.830 ] 00:08:00.830 }' 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.830 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.399 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.400 [2024-12-15 18:39:01.611101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.400 BaseBdev1 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 [ 00:08:01.400 { 00:08:01.400 "name": "BaseBdev1", 00:08:01.400 "aliases": [ 00:08:01.400 "2930cc7c-ed00-4a60-a7e5-c48a37b4291f" 00:08:01.400 ], 00:08:01.400 "product_name": "Malloc disk", 00:08:01.400 "block_size": 512, 00:08:01.400 "num_blocks": 65536, 00:08:01.400 
"uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:01.400 "assigned_rate_limits": { 00:08:01.400 "rw_ios_per_sec": 0, 00:08:01.400 "rw_mbytes_per_sec": 0, 00:08:01.400 "r_mbytes_per_sec": 0, 00:08:01.400 "w_mbytes_per_sec": 0 00:08:01.400 }, 00:08:01.400 "claimed": true, 00:08:01.400 "claim_type": "exclusive_write", 00:08:01.400 "zoned": false, 00:08:01.400 "supported_io_types": { 00:08:01.400 "read": true, 00:08:01.400 "write": true, 00:08:01.400 "unmap": true, 00:08:01.400 "flush": true, 00:08:01.400 "reset": true, 00:08:01.400 "nvme_admin": false, 00:08:01.400 "nvme_io": false, 00:08:01.400 "nvme_io_md": false, 00:08:01.400 "write_zeroes": true, 00:08:01.400 "zcopy": true, 00:08:01.400 "get_zone_info": false, 00:08:01.400 "zone_management": false, 00:08:01.400 "zone_append": false, 00:08:01.400 "compare": false, 00:08:01.400 "compare_and_write": false, 00:08:01.400 "abort": true, 00:08:01.400 "seek_hole": false, 00:08:01.400 "seek_data": false, 00:08:01.400 "copy": true, 00:08:01.400 "nvme_iov_md": false 00:08:01.400 }, 00:08:01.400 "memory_domains": [ 00:08:01.400 { 00:08:01.400 "dma_device_id": "system", 00:08:01.400 "dma_device_type": 1 00:08:01.400 }, 00:08:01.400 { 00:08:01.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.400 "dma_device_type": 2 00:08:01.400 } 00:08:01.400 ], 00:08:01.400 "driver_specific": {} 00:08:01.400 } 00:08:01.400 ] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.400 
18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.400 "name": "Existed_Raid", 00:08:01.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.400 "strip_size_kb": 64, 00:08:01.400 "state": "configuring", 00:08:01.400 "raid_level": "concat", 00:08:01.400 "superblock": false, 00:08:01.400 "num_base_bdevs": 3, 00:08:01.400 "num_base_bdevs_discovered": 2, 00:08:01.400 "num_base_bdevs_operational": 3, 00:08:01.400 "base_bdevs_list": [ 00:08:01.400 { 00:08:01.400 "name": "BaseBdev1", 00:08:01.400 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:01.400 "is_configured": true, 00:08:01.400 
"data_offset": 0, 00:08:01.400 "data_size": 65536 00:08:01.400 }, 00:08:01.400 { 00:08:01.400 "name": null, 00:08:01.400 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:01.400 "is_configured": false, 00:08:01.400 "data_offset": 0, 00:08:01.400 "data_size": 65536 00:08:01.400 }, 00:08:01.400 { 00:08:01.400 "name": "BaseBdev3", 00:08:01.400 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:01.400 "is_configured": true, 00:08:01.400 "data_offset": 0, 00:08:01.400 "data_size": 65536 00:08:01.400 } 00:08:01.400 ] 00:08:01.400 }' 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.400 18:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:01.659 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:01.660 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.660 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.919 [2024-12-15 18:39:02.102327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.919 
18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.919 "name": "Existed_Raid", 00:08:01.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.919 "strip_size_kb": 64, 00:08:01.919 "state": "configuring", 
00:08:01.919 "raid_level": "concat", 00:08:01.919 "superblock": false, 00:08:01.919 "num_base_bdevs": 3, 00:08:01.919 "num_base_bdevs_discovered": 1, 00:08:01.919 "num_base_bdevs_operational": 3, 00:08:01.919 "base_bdevs_list": [ 00:08:01.919 { 00:08:01.919 "name": "BaseBdev1", 00:08:01.919 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:01.919 "is_configured": true, 00:08:01.919 "data_offset": 0, 00:08:01.919 "data_size": 65536 00:08:01.919 }, 00:08:01.919 { 00:08:01.919 "name": null, 00:08:01.919 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:01.919 "is_configured": false, 00:08:01.919 "data_offset": 0, 00:08:01.919 "data_size": 65536 00:08:01.919 }, 00:08:01.919 { 00:08:01.919 "name": null, 00:08:01.919 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:01.919 "is_configured": false, 00:08:01.919 "data_offset": 0, 00:08:01.919 "data_size": 65536 00:08:01.919 } 00:08:01.919 ] 00:08:01.919 }' 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.919 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:02.179 18:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 [2024-12-15 18:39:02.549597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.179 18:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.179 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.179 "name": "Existed_Raid", 00:08:02.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.179 "strip_size_kb": 64, 00:08:02.179 "state": "configuring", 00:08:02.179 "raid_level": "concat", 00:08:02.179 "superblock": false, 00:08:02.179 "num_base_bdevs": 3, 00:08:02.179 "num_base_bdevs_discovered": 2, 00:08:02.179 "num_base_bdevs_operational": 3, 00:08:02.179 "base_bdevs_list": [ 00:08:02.179 { 00:08:02.179 "name": "BaseBdev1", 00:08:02.179 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:02.179 "is_configured": true, 00:08:02.179 "data_offset": 0, 00:08:02.179 "data_size": 65536 00:08:02.179 }, 00:08:02.179 { 00:08:02.179 "name": null, 00:08:02.179 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:02.179 "is_configured": false, 00:08:02.179 "data_offset": 0, 00:08:02.179 "data_size": 65536 00:08:02.179 }, 00:08:02.179 { 00:08:02.179 "name": "BaseBdev3", 00:08:02.180 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:02.180 "is_configured": true, 00:08:02.180 "data_offset": 0, 00:08:02.180 "data_size": 65536 00:08:02.180 } 00:08:02.180 ] 00:08:02.180 }' 00:08:02.180 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.180 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.749 18:39:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 [2024-12-15 18:39:02.972985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 18:39:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.749 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.749 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.750 "name": "Existed_Raid", 00:08:02.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.750 "strip_size_kb": 64, 00:08:02.750 "state": "configuring", 00:08:02.750 "raid_level": "concat", 00:08:02.750 "superblock": false, 00:08:02.750 "num_base_bdevs": 3, 00:08:02.750 "num_base_bdevs_discovered": 1, 00:08:02.750 "num_base_bdevs_operational": 3, 00:08:02.750 "base_bdevs_list": [ 00:08:02.750 { 00:08:02.750 "name": null, 00:08:02.750 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:02.750 "is_configured": false, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 }, 00:08:02.750 { 00:08:02.750 "name": null, 00:08:02.750 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:02.750 "is_configured": false, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 }, 00:08:02.750 { 00:08:02.750 "name": "BaseBdev3", 00:08:02.750 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:02.750 "is_configured": true, 00:08:02.750 "data_offset": 0, 00:08:02.750 "data_size": 65536 00:08:02.750 } 00:08:02.750 ] 00:08:02.750 }' 00:08:02.750 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.750 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 
18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 [2024-12-15 18:39:03.436438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.010 
18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.010 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.269 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.269 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.270 "name": "Existed_Raid", 00:08:03.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.270 "strip_size_kb": 64, 00:08:03.270 "state": "configuring", 00:08:03.270 "raid_level": "concat", 00:08:03.270 "superblock": false, 00:08:03.270 "num_base_bdevs": 3, 00:08:03.270 "num_base_bdevs_discovered": 2, 00:08:03.270 "num_base_bdevs_operational": 3, 00:08:03.270 "base_bdevs_list": [ 00:08:03.270 { 00:08:03.270 "name": null, 00:08:03.270 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:03.270 "is_configured": false, 00:08:03.270 "data_offset": 0, 00:08:03.270 "data_size": 65536 00:08:03.270 }, 00:08:03.270 { 00:08:03.270 "name": "BaseBdev2", 00:08:03.270 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:03.270 "is_configured": true, 00:08:03.270 "data_offset": 0, 00:08:03.270 "data_size": 65536 00:08:03.270 }, 00:08:03.270 { 00:08:03.270 "name": "BaseBdev3", 00:08:03.270 
"uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:03.270 "is_configured": true, 00:08:03.270 "data_offset": 0, 00:08:03.270 "data_size": 65536 00:08:03.270 } 00:08:03.270 ] 00:08:03.270 }' 00:08:03.270 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.270 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2930cc7c-ed00-4a60-a7e5-c48a37b4291f 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.529 [2024-12-15 18:39:03.932644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:03.529 [2024-12-15 18:39:03.932696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:03.529 [2024-12-15 18:39:03.932707] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:03.529 [2024-12-15 18:39:03.933003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:03.529 [2024-12-15 18:39:03.933154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:03.529 [2024-12-15 18:39:03.933170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:03.529 [2024-12-15 18:39:03.933378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.529 NewBaseBdev 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.529 
18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.529 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.529 [ 00:08:03.529 { 00:08:03.529 "name": "NewBaseBdev", 00:08:03.529 "aliases": [ 00:08:03.529 "2930cc7c-ed00-4a60-a7e5-c48a37b4291f" 00:08:03.529 ], 00:08:03.529 "product_name": "Malloc disk", 00:08:03.529 "block_size": 512, 00:08:03.529 "num_blocks": 65536, 00:08:03.529 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:03.529 "assigned_rate_limits": { 00:08:03.529 "rw_ios_per_sec": 0, 00:08:03.529 "rw_mbytes_per_sec": 0, 00:08:03.529 "r_mbytes_per_sec": 0, 00:08:03.529 "w_mbytes_per_sec": 0 00:08:03.529 }, 00:08:03.529 "claimed": true, 00:08:03.529 "claim_type": "exclusive_write", 00:08:03.529 "zoned": false, 00:08:03.529 "supported_io_types": { 00:08:03.529 "read": true, 00:08:03.789 "write": true, 00:08:03.789 "unmap": true, 00:08:03.789 "flush": true, 00:08:03.789 "reset": true, 00:08:03.789 "nvme_admin": false, 00:08:03.789 "nvme_io": false, 00:08:03.789 "nvme_io_md": false, 00:08:03.789 "write_zeroes": true, 00:08:03.789 "zcopy": true, 00:08:03.789 "get_zone_info": false, 00:08:03.789 "zone_management": false, 00:08:03.789 "zone_append": false, 00:08:03.789 "compare": false, 00:08:03.789 "compare_and_write": false, 00:08:03.789 "abort": true, 00:08:03.789 "seek_hole": false, 00:08:03.789 "seek_data": false, 00:08:03.789 "copy": true, 00:08:03.789 "nvme_iov_md": false 00:08:03.789 }, 00:08:03.789 "memory_domains": [ 00:08:03.789 { 00:08:03.789 "dma_device_id": "system", 00:08:03.789 "dma_device_type": 1 
00:08:03.789 }, 00:08:03.789 { 00:08:03.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.789 "dma_device_type": 2 00:08:03.789 } 00:08:03.789 ], 00:08:03.789 "driver_specific": {} 00:08:03.789 } 00:08:03.789 ] 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.789 18:39:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.789 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.789 "name": "Existed_Raid", 00:08:03.789 "uuid": "54680b2b-26b5-4669-9c56-97204c5a8dc3", 00:08:03.789 "strip_size_kb": 64, 00:08:03.789 "state": "online", 00:08:03.789 "raid_level": "concat", 00:08:03.789 "superblock": false, 00:08:03.789 "num_base_bdevs": 3, 00:08:03.789 "num_base_bdevs_discovered": 3, 00:08:03.789 "num_base_bdevs_operational": 3, 00:08:03.789 "base_bdevs_list": [ 00:08:03.789 { 00:08:03.789 "name": "NewBaseBdev", 00:08:03.789 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:03.789 "is_configured": true, 00:08:03.789 "data_offset": 0, 00:08:03.789 "data_size": 65536 00:08:03.789 }, 00:08:03.789 { 00:08:03.789 "name": "BaseBdev2", 00:08:03.789 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:03.789 "is_configured": true, 00:08:03.789 "data_offset": 0, 00:08:03.789 "data_size": 65536 00:08:03.789 }, 00:08:03.789 { 00:08:03.789 "name": "BaseBdev3", 00:08:03.789 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:03.789 "is_configured": true, 00:08:03.789 "data_offset": 0, 00:08:03.789 "data_size": 65536 00:08:03.789 } 00:08:03.789 ] 00:08:03.789 }' 00:08:03.789 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.789 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.049 [2024-12-15 18:39:04.380351] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.049 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.049 "name": "Existed_Raid", 00:08:04.049 "aliases": [ 00:08:04.049 "54680b2b-26b5-4669-9c56-97204c5a8dc3" 00:08:04.049 ], 00:08:04.049 "product_name": "Raid Volume", 00:08:04.049 "block_size": 512, 00:08:04.049 "num_blocks": 196608, 00:08:04.049 "uuid": "54680b2b-26b5-4669-9c56-97204c5a8dc3", 00:08:04.049 "assigned_rate_limits": { 00:08:04.049 "rw_ios_per_sec": 0, 00:08:04.049 "rw_mbytes_per_sec": 0, 00:08:04.049 "r_mbytes_per_sec": 0, 00:08:04.049 "w_mbytes_per_sec": 0 00:08:04.049 }, 00:08:04.049 "claimed": false, 00:08:04.049 "zoned": false, 00:08:04.049 "supported_io_types": { 00:08:04.049 "read": true, 00:08:04.049 "write": true, 00:08:04.049 "unmap": true, 00:08:04.049 "flush": true, 00:08:04.049 "reset": true, 00:08:04.049 "nvme_admin": false, 00:08:04.049 "nvme_io": false, 00:08:04.049 "nvme_io_md": false, 00:08:04.049 "write_zeroes": true, 00:08:04.049 "zcopy": false, 00:08:04.049 "get_zone_info": false, 00:08:04.049 "zone_management": false, 00:08:04.049 
"zone_append": false, 00:08:04.049 "compare": false, 00:08:04.049 "compare_and_write": false, 00:08:04.049 "abort": false, 00:08:04.049 "seek_hole": false, 00:08:04.049 "seek_data": false, 00:08:04.049 "copy": false, 00:08:04.049 "nvme_iov_md": false 00:08:04.049 }, 00:08:04.049 "memory_domains": [ 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "system", 00:08:04.049 "dma_device_type": 1 00:08:04.049 }, 00:08:04.049 { 00:08:04.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.049 "dma_device_type": 2 00:08:04.049 } 00:08:04.049 ], 00:08:04.049 "driver_specific": { 00:08:04.049 "raid": { 00:08:04.049 "uuid": "54680b2b-26b5-4669-9c56-97204c5a8dc3", 00:08:04.050 "strip_size_kb": 64, 00:08:04.050 "state": "online", 00:08:04.050 "raid_level": "concat", 00:08:04.050 "superblock": false, 00:08:04.050 "num_base_bdevs": 3, 00:08:04.050 "num_base_bdevs_discovered": 3, 00:08:04.050 "num_base_bdevs_operational": 3, 00:08:04.050 "base_bdevs_list": [ 00:08:04.050 { 00:08:04.050 "name": "NewBaseBdev", 00:08:04.050 "uuid": "2930cc7c-ed00-4a60-a7e5-c48a37b4291f", 00:08:04.050 "is_configured": true, 00:08:04.050 "data_offset": 0, 00:08:04.050 "data_size": 65536 00:08:04.050 }, 00:08:04.050 { 00:08:04.050 "name": "BaseBdev2", 00:08:04.050 "uuid": "de8c19e5-36a7-4e6e-aa35-306b3d081711", 00:08:04.050 "is_configured": true, 00:08:04.050 "data_offset": 0, 00:08:04.050 "data_size": 65536 00:08:04.050 }, 00:08:04.050 { 00:08:04.050 "name": "BaseBdev3", 00:08:04.050 "uuid": "c2c27387-e120-4c2d-8a84-610de490b7c1", 00:08:04.050 "is_configured": 
true, 00:08:04.050 "data_offset": 0, 00:08:04.050 "data_size": 65536 00:08:04.050 } 00:08:04.050 ] 00:08:04.050 } 00:08:04.050 } 00:08:04.050 }' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:04.050 BaseBdev2 00:08:04.050 BaseBdev3' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.050 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 [2024-12-15 18:39:04.631482] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:04.310 [2024-12-15 18:39:04.631532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.310 [2024-12-15 18:39:04.631622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.310 [2024-12-15 18:39:04.631686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.310 [2024-12-15 18:39:04.631709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78680 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78680 ']' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78680 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78680 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.310 killing process with pid 78680 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78680' 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78680 00:08:04.310 [2024-12-15 18:39:04.672444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:04.310 18:39:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78680 00:08:04.310 [2024-12-15 18:39:04.732266] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.882 00:08:04.882 real 0m8.768s 00:08:04.882 user 0m14.702s 00:08:04.882 sys 0m1.845s 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.882 ************************************ 00:08:04.882 END TEST raid_state_function_test 00:08:04.882 ************************************ 00:08:04.882 18:39:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:04.882 18:39:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:04.882 18:39:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.882 18:39:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.882 ************************************ 00:08:04.882 START TEST raid_state_function_test_sb 00:08:04.882 ************************************ 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79285 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.882 Process raid pid: 79285 00:08:04.882 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79285' 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79285 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79285 ']' 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.883 18:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.883 [2024-12-15 18:39:05.222061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:04.883 [2024-12-15 18:39:05.222198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.152 [2024-12-15 18:39:05.398641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.152 [2024-12-15 18:39:05.437865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.152 [2024-12-15 18:39:05.514076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.152 [2024-12-15 18:39:05.514118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.730 [2024-12-15 18:39:06.100205] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.730 [2024-12-15 18:39:06.100278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.730 [2024-12-15 18:39:06.100304] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.730 [2024-12-15 18:39:06.100316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.730 [2024-12-15 18:39:06.100327] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:05.730 [2024-12-15 18:39:06.100340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.730 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.731 "name": "Existed_Raid", 00:08:05.731 "uuid": "397caeec-eee7-494c-be1d-e3d6efc02ab1", 00:08:05.731 "strip_size_kb": 64, 00:08:05.731 "state": "configuring", 00:08:05.731 "raid_level": "concat", 00:08:05.731 "superblock": true, 00:08:05.731 "num_base_bdevs": 3, 00:08:05.731 "num_base_bdevs_discovered": 0, 00:08:05.731 "num_base_bdevs_operational": 3, 00:08:05.731 "base_bdevs_list": [ 00:08:05.731 { 00:08:05.731 "name": "BaseBdev1", 00:08:05.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.731 "is_configured": false, 00:08:05.731 "data_offset": 0, 00:08:05.731 "data_size": 0 00:08:05.731 }, 00:08:05.731 { 00:08:05.731 "name": "BaseBdev2", 00:08:05.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.731 "is_configured": false, 00:08:05.731 "data_offset": 0, 00:08:05.731 "data_size": 0 00:08:05.731 }, 00:08:05.731 { 00:08:05.731 "name": "BaseBdev3", 00:08:05.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.731 "is_configured": false, 00:08:05.731 "data_offset": 0, 00:08:05.731 "data_size": 0 00:08:05.731 } 00:08:05.731 ] 00:08:05.731 }' 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.731 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.300 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 [2024-12-15 18:39:06.551343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.301 [2024-12-15 18:39:06.551405] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 [2024-12-15 18:39:06.563312] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.301 [2024-12-15 18:39:06.563356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.301 [2024-12-15 18:39:06.563365] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.301 [2024-12-15 18:39:06.563374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.301 [2024-12-15 18:39:06.563381] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.301 [2024-12-15 18:39:06.563390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 [2024-12-15 18:39:06.590209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.301 BaseBdev1 
00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 [ 00:08:06.301 { 00:08:06.301 "name": "BaseBdev1", 00:08:06.301 "aliases": [ 00:08:06.301 "0abc2bef-44fb-44dc-be76-0c87e9c33b36" 00:08:06.301 ], 00:08:06.301 "product_name": "Malloc disk", 00:08:06.301 "block_size": 512, 00:08:06.301 "num_blocks": 65536, 00:08:06.301 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:06.301 "assigned_rate_limits": { 00:08:06.301 
"rw_ios_per_sec": 0, 00:08:06.301 "rw_mbytes_per_sec": 0, 00:08:06.301 "r_mbytes_per_sec": 0, 00:08:06.301 "w_mbytes_per_sec": 0 00:08:06.301 }, 00:08:06.301 "claimed": true, 00:08:06.301 "claim_type": "exclusive_write", 00:08:06.301 "zoned": false, 00:08:06.301 "supported_io_types": { 00:08:06.301 "read": true, 00:08:06.301 "write": true, 00:08:06.301 "unmap": true, 00:08:06.301 "flush": true, 00:08:06.301 "reset": true, 00:08:06.301 "nvme_admin": false, 00:08:06.301 "nvme_io": false, 00:08:06.301 "nvme_io_md": false, 00:08:06.301 "write_zeroes": true, 00:08:06.301 "zcopy": true, 00:08:06.301 "get_zone_info": false, 00:08:06.301 "zone_management": false, 00:08:06.301 "zone_append": false, 00:08:06.301 "compare": false, 00:08:06.301 "compare_and_write": false, 00:08:06.301 "abort": true, 00:08:06.301 "seek_hole": false, 00:08:06.301 "seek_data": false, 00:08:06.301 "copy": true, 00:08:06.301 "nvme_iov_md": false 00:08:06.301 }, 00:08:06.301 "memory_domains": [ 00:08:06.301 { 00:08:06.301 "dma_device_id": "system", 00:08:06.301 "dma_device_type": 1 00:08:06.301 }, 00:08:06.301 { 00:08:06.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.301 "dma_device_type": 2 00:08:06.301 } 00:08:06.301 ], 00:08:06.301 "driver_specific": {} 00:08:06.301 } 00:08:06.301 ] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.301 "name": "Existed_Raid", 00:08:06.301 "uuid": "a0518c25-15cb-49ee-967d-def4a0b33cbd", 00:08:06.301 "strip_size_kb": 64, 00:08:06.301 "state": "configuring", 00:08:06.301 "raid_level": "concat", 00:08:06.301 "superblock": true, 00:08:06.301 "num_base_bdevs": 3, 00:08:06.301 "num_base_bdevs_discovered": 1, 00:08:06.301 "num_base_bdevs_operational": 3, 00:08:06.301 "base_bdevs_list": [ 00:08:06.301 { 00:08:06.301 "name": "BaseBdev1", 00:08:06.301 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:06.301 "is_configured": true, 00:08:06.301 "data_offset": 2048, 00:08:06.301 "data_size": 
63488 00:08:06.301 }, 00:08:06.301 { 00:08:06.301 "name": "BaseBdev2", 00:08:06.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.301 "is_configured": false, 00:08:06.301 "data_offset": 0, 00:08:06.301 "data_size": 0 00:08:06.301 }, 00:08:06.301 { 00:08:06.301 "name": "BaseBdev3", 00:08:06.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.301 "is_configured": false, 00:08:06.301 "data_offset": 0, 00:08:06.301 "data_size": 0 00:08:06.301 } 00:08:06.301 ] 00:08:06.301 }' 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.301 18:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.870 [2024-12-15 18:39:07.025470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.870 [2024-12-15 18:39:07.025536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.870 [2024-12-15 18:39:07.037513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.870 [2024-12-15 
18:39:07.039623] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.870 [2024-12-15 18:39:07.039672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.870 [2024-12-15 18:39:07.039682] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.870 [2024-12-15 18:39:07.039692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.870 "name": "Existed_Raid", 00:08:06.870 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:06.870 "strip_size_kb": 64, 00:08:06.870 "state": "configuring", 00:08:06.870 "raid_level": "concat", 00:08:06.870 "superblock": true, 00:08:06.870 "num_base_bdevs": 3, 00:08:06.870 "num_base_bdevs_discovered": 1, 00:08:06.870 "num_base_bdevs_operational": 3, 00:08:06.870 "base_bdevs_list": [ 00:08:06.870 { 00:08:06.870 "name": "BaseBdev1", 00:08:06.870 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:06.870 "is_configured": true, 00:08:06.870 "data_offset": 2048, 00:08:06.870 "data_size": 63488 00:08:06.870 }, 00:08:06.870 { 00:08:06.870 "name": "BaseBdev2", 00:08:06.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.870 "is_configured": false, 00:08:06.870 "data_offset": 0, 00:08:06.870 "data_size": 0 00:08:06.870 }, 00:08:06.870 { 00:08:06.870 "name": "BaseBdev3", 00:08:06.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.870 "is_configured": false, 00:08:06.870 "data_offset": 0, 00:08:06.870 "data_size": 0 00:08:06.870 } 00:08:06.870 ] 00:08:06.870 }' 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.870 18:39:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.130 [2024-12-15 18:39:07.481529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.130 BaseBdev2 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.130 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.130 [ 00:08:07.130 { 00:08:07.130 "name": "BaseBdev2", 00:08:07.130 "aliases": [ 00:08:07.130 "1d7ee496-5290-4630-be11-214072acd2a6" 00:08:07.130 ], 00:08:07.130 "product_name": "Malloc disk", 00:08:07.130 "block_size": 512, 00:08:07.130 "num_blocks": 65536, 00:08:07.130 "uuid": "1d7ee496-5290-4630-be11-214072acd2a6", 00:08:07.130 "assigned_rate_limits": { 00:08:07.130 "rw_ios_per_sec": 0, 00:08:07.130 "rw_mbytes_per_sec": 0, 00:08:07.130 "r_mbytes_per_sec": 0, 00:08:07.130 "w_mbytes_per_sec": 0 00:08:07.130 }, 00:08:07.130 "claimed": true, 00:08:07.130 "claim_type": "exclusive_write", 00:08:07.130 "zoned": false, 00:08:07.130 "supported_io_types": { 00:08:07.130 "read": true, 00:08:07.130 "write": true, 00:08:07.130 "unmap": true, 00:08:07.130 "flush": true, 00:08:07.130 "reset": true, 00:08:07.130 "nvme_admin": false, 00:08:07.130 "nvme_io": false, 00:08:07.130 "nvme_io_md": false, 00:08:07.130 "write_zeroes": true, 00:08:07.130 "zcopy": true, 00:08:07.130 "get_zone_info": false, 00:08:07.130 "zone_management": false, 00:08:07.131 "zone_append": false, 00:08:07.131 "compare": false, 00:08:07.131 "compare_and_write": false, 00:08:07.131 "abort": true, 00:08:07.131 "seek_hole": false, 00:08:07.131 "seek_data": false, 00:08:07.131 "copy": true, 00:08:07.131 "nvme_iov_md": false 00:08:07.131 }, 00:08:07.131 "memory_domains": [ 00:08:07.131 { 00:08:07.131 "dma_device_id": "system", 00:08:07.131 "dma_device_type": 1 00:08:07.131 }, 00:08:07.131 { 00:08:07.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.131 "dma_device_type": 2 00:08:07.131 } 00:08:07.131 ], 00:08:07.131 "driver_specific": {} 00:08:07.131 } 00:08:07.131 ] 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.131 "name": "Existed_Raid", 00:08:07.131 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:07.131 "strip_size_kb": 64, 00:08:07.131 "state": "configuring", 00:08:07.131 "raid_level": "concat", 00:08:07.131 "superblock": true, 00:08:07.131 "num_base_bdevs": 3, 00:08:07.131 "num_base_bdevs_discovered": 2, 00:08:07.131 "num_base_bdevs_operational": 3, 00:08:07.131 "base_bdevs_list": [ 00:08:07.131 { 00:08:07.131 "name": "BaseBdev1", 00:08:07.131 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:07.131 "is_configured": true, 00:08:07.131 "data_offset": 2048, 00:08:07.131 "data_size": 63488 00:08:07.131 }, 00:08:07.131 { 00:08:07.131 "name": "BaseBdev2", 00:08:07.131 "uuid": "1d7ee496-5290-4630-be11-214072acd2a6", 00:08:07.131 "is_configured": true, 00:08:07.131 "data_offset": 2048, 00:08:07.131 "data_size": 63488 00:08:07.131 }, 00:08:07.131 { 00:08:07.131 "name": "BaseBdev3", 00:08:07.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.131 "is_configured": false, 00:08:07.131 "data_offset": 0, 00:08:07.131 "data_size": 0 00:08:07.131 } 00:08:07.131 ] 00:08:07.131 }' 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.131 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.701 [2024-12-15 18:39:07.986915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.701 [2024-12-15 18:39:07.987168] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:07.701 [2024-12-15 18:39:07.987192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:07.701 BaseBdev3 00:08:07.701 [2024-12-15 18:39:07.987592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:07.701 [2024-12-15 18:39:07.987817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:07.701 [2024-12-15 18:39:07.987838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.701 [2024-12-15 18:39:07.988004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.701 18:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.701 [ 00:08:07.701 { 00:08:07.701 "name": "BaseBdev3", 00:08:07.701 "aliases": [ 00:08:07.701 "9c7047af-3a14-490c-a952-6f6083ea9f45" 00:08:07.701 ], 00:08:07.701 "product_name": "Malloc disk", 00:08:07.701 "block_size": 512, 00:08:07.701 "num_blocks": 65536, 00:08:07.701 "uuid": "9c7047af-3a14-490c-a952-6f6083ea9f45", 00:08:07.701 "assigned_rate_limits": { 00:08:07.701 "rw_ios_per_sec": 0, 00:08:07.701 "rw_mbytes_per_sec": 0, 00:08:07.701 "r_mbytes_per_sec": 0, 00:08:07.701 "w_mbytes_per_sec": 0 00:08:07.701 }, 00:08:07.701 "claimed": true, 00:08:07.701 "claim_type": "exclusive_write", 00:08:07.701 "zoned": false, 00:08:07.701 "supported_io_types": { 00:08:07.701 "read": true, 00:08:07.701 "write": true, 00:08:07.701 "unmap": true, 00:08:07.701 "flush": true, 00:08:07.701 "reset": true, 00:08:07.701 "nvme_admin": false, 00:08:07.701 "nvme_io": false, 00:08:07.701 "nvme_io_md": false, 00:08:07.701 "write_zeroes": true, 00:08:07.701 "zcopy": true, 00:08:07.701 "get_zone_info": false, 00:08:07.701 "zone_management": false, 00:08:07.701 "zone_append": false, 00:08:07.701 "compare": false, 00:08:07.701 "compare_and_write": false, 00:08:07.701 "abort": true, 00:08:07.701 "seek_hole": false, 00:08:07.701 "seek_data": false, 00:08:07.701 "copy": true, 00:08:07.701 "nvme_iov_md": false 00:08:07.701 }, 00:08:07.701 "memory_domains": [ 00:08:07.701 { 00:08:07.701 "dma_device_id": "system", 00:08:07.701 "dma_device_type": 1 00:08:07.701 }, 00:08:07.701 { 00:08:07.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.701 "dma_device_type": 2 00:08:07.701 } 00:08:07.701 ], 00:08:07.701 "driver_specific": 
{} 00:08:07.701 } 00:08:07.701 ] 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.701 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.702 "name": "Existed_Raid", 00:08:07.702 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:07.702 "strip_size_kb": 64, 00:08:07.702 "state": "online", 00:08:07.702 "raid_level": "concat", 00:08:07.702 "superblock": true, 00:08:07.702 "num_base_bdevs": 3, 00:08:07.702 "num_base_bdevs_discovered": 3, 00:08:07.702 "num_base_bdevs_operational": 3, 00:08:07.702 "base_bdevs_list": [ 00:08:07.702 { 00:08:07.702 "name": "BaseBdev1", 00:08:07.702 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:07.702 "is_configured": true, 00:08:07.702 "data_offset": 2048, 00:08:07.702 "data_size": 63488 00:08:07.702 }, 00:08:07.702 { 00:08:07.702 "name": "BaseBdev2", 00:08:07.702 "uuid": "1d7ee496-5290-4630-be11-214072acd2a6", 00:08:07.702 "is_configured": true, 00:08:07.702 "data_offset": 2048, 00:08:07.702 "data_size": 63488 00:08:07.702 }, 00:08:07.702 { 00:08:07.702 "name": "BaseBdev3", 00:08:07.702 "uuid": "9c7047af-3a14-490c-a952-6f6083ea9f45", 00:08:07.702 "is_configured": true, 00:08:07.702 "data_offset": 2048, 00:08:07.702 "data_size": 63488 00:08:07.702 } 00:08:07.702 ] 00:08:07.702 }' 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.702 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.271 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.271 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.272 [2024-12-15 18:39:08.478495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.272 "name": "Existed_Raid", 00:08:08.272 "aliases": [ 00:08:08.272 "c0aa0b4b-7487-4102-a4f5-b9354989a36a" 00:08:08.272 ], 00:08:08.272 "product_name": "Raid Volume", 00:08:08.272 "block_size": 512, 00:08:08.272 "num_blocks": 190464, 00:08:08.272 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:08.272 "assigned_rate_limits": { 00:08:08.272 "rw_ios_per_sec": 0, 00:08:08.272 "rw_mbytes_per_sec": 0, 00:08:08.272 "r_mbytes_per_sec": 0, 00:08:08.272 "w_mbytes_per_sec": 0 00:08:08.272 }, 00:08:08.272 "claimed": false, 00:08:08.272 "zoned": false, 00:08:08.272 "supported_io_types": { 00:08:08.272 "read": true, 00:08:08.272 "write": true, 00:08:08.272 "unmap": true, 00:08:08.272 "flush": true, 00:08:08.272 "reset": true, 00:08:08.272 "nvme_admin": false, 00:08:08.272 "nvme_io": false, 00:08:08.272 "nvme_io_md": false, 00:08:08.272 
"write_zeroes": true, 00:08:08.272 "zcopy": false, 00:08:08.272 "get_zone_info": false, 00:08:08.272 "zone_management": false, 00:08:08.272 "zone_append": false, 00:08:08.272 "compare": false, 00:08:08.272 "compare_and_write": false, 00:08:08.272 "abort": false, 00:08:08.272 "seek_hole": false, 00:08:08.272 "seek_data": false, 00:08:08.272 "copy": false, 00:08:08.272 "nvme_iov_md": false 00:08:08.272 }, 00:08:08.272 "memory_domains": [ 00:08:08.272 { 00:08:08.272 "dma_device_id": "system", 00:08:08.272 "dma_device_type": 1 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.272 "dma_device_type": 2 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "system", 00:08:08.272 "dma_device_type": 1 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.272 "dma_device_type": 2 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "system", 00:08:08.272 "dma_device_type": 1 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.272 "dma_device_type": 2 00:08:08.272 } 00:08:08.272 ], 00:08:08.272 "driver_specific": { 00:08:08.272 "raid": { 00:08:08.272 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:08.272 "strip_size_kb": 64, 00:08:08.272 "state": "online", 00:08:08.272 "raid_level": "concat", 00:08:08.272 "superblock": true, 00:08:08.272 "num_base_bdevs": 3, 00:08:08.272 "num_base_bdevs_discovered": 3, 00:08:08.272 "num_base_bdevs_operational": 3, 00:08:08.272 "base_bdevs_list": [ 00:08:08.272 { 00:08:08.272 "name": "BaseBdev1", 00:08:08.272 "uuid": "0abc2bef-44fb-44dc-be76-0c87e9c33b36", 00:08:08.272 "is_configured": true, 00:08:08.272 "data_offset": 2048, 00:08:08.272 "data_size": 63488 00:08:08.272 }, 00:08:08.272 { 00:08:08.272 "name": "BaseBdev2", 00:08:08.272 "uuid": "1d7ee496-5290-4630-be11-214072acd2a6", 00:08:08.272 "is_configured": true, 00:08:08.272 "data_offset": 2048, 00:08:08.272 "data_size": 63488 00:08:08.272 }, 
00:08:08.272 { 00:08:08.272 "name": "BaseBdev3", 00:08:08.272 "uuid": "9c7047af-3a14-490c-a952-6f6083ea9f45", 00:08:08.272 "is_configured": true, 00:08:08.272 "data_offset": 2048, 00:08:08.272 "data_size": 63488 00:08:08.272 } 00:08:08.272 ] 00:08:08.272 } 00:08:08.272 } 00:08:08.272 }' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:08.272 BaseBdev2 00:08:08.272 BaseBdev3' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.272 
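[editor's note] The two jq filters the test just ran above (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` and `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`) can be illustrated with a small Python sketch; the JSON shape below is trimmed from the `raid_bdev_info` dump captured in this log, and the jq-equivalence (missing keys joining as empty strings) is an assumption about why the log compares against `512` padded with trailing blanks:

```python
import json

# Trimmed snapshot mirroring the raid_bdev_info JSON dumped in the log above.
bdev = json.loads("""
{
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": true}
      ]
    }
  }
}
""")

# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
names = [b["name"]
         for b in bdev["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

# jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Keys absent from the bdev dump join as empty strings, which would explain
# the log's comparison [[ 512   == \\5\\1\\2\\ \\ \\  ]] (three trailing blanks).
cmp_raid_bdev = " ".join(
    str(bdev[k]) if k in bdev else ""
    for k in ("block_size", "md_size", "md_interleave", "dif_type")
)

print(names)
print(repr(cmp_raid_bdev))
```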
18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.272 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.532 [2024-12-15 18:39:08.777821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.532 [2024-12-15 18:39:08.777855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.532 [2024-12-15 18:39:08.777936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.532 "name": "Existed_Raid", 00:08:08.532 "uuid": "c0aa0b4b-7487-4102-a4f5-b9354989a36a", 00:08:08.532 "strip_size_kb": 64, 00:08:08.532 "state": "offline", 00:08:08.532 "raid_level": "concat", 00:08:08.532 "superblock": true, 00:08:08.532 "num_base_bdevs": 3, 00:08:08.532 "num_base_bdevs_discovered": 2, 00:08:08.532 "num_base_bdevs_operational": 2, 00:08:08.532 "base_bdevs_list": [ 00:08:08.532 { 00:08:08.532 "name": null, 00:08:08.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.532 "is_configured": false, 00:08:08.532 "data_offset": 0, 00:08:08.532 "data_size": 63488 00:08:08.532 }, 00:08:08.532 { 00:08:08.532 "name": "BaseBdev2", 00:08:08.532 "uuid": "1d7ee496-5290-4630-be11-214072acd2a6", 00:08:08.532 "is_configured": true, 00:08:08.532 "data_offset": 2048, 00:08:08.532 "data_size": 63488 00:08:08.532 }, 00:08:08.532 { 00:08:08.532 "name": "BaseBdev3", 00:08:08.532 "uuid": "9c7047af-3a14-490c-a952-6f6083ea9f45", 
00:08:08.532 "is_configured": true, 00:08:08.532 "data_offset": 2048, 00:08:08.532 "data_size": 63488 00:08:08.532 } 00:08:08.532 ] 00:08:08.532 }' 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.532 18:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 [2024-12-15 18:39:09.297744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb 
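[editor's note] At this point the test has deleted BaseBdev1, and because `has_redundancy concat` returned 1 it set `expected_state=offline` before re-verifying the array. A minimal Python sketch of that decision, assuming (as the `case` branches in the log suggest) that only the mirrored/parity levels count as redundant in the test helper:

```python
# Assumed set of RAID levels for which the has_redundancy helper in
# bdev_raid.sh succeeds; concat and raid0 are not among them.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """An array with redundancy is expected to stay 'online' after losing
    one base bdev; a concat/raid0 array is expected to go 'offline'."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

# Matches expected_state=offline chosen by the test above for concat.
print(expected_state_after_base_bdev_loss("concat"))
```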
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 [2024-12-15 18:39:09.374274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.101 [2024-12-15 18:39:09.374427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:09.101 18:39:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.101 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.101 [ 00:08:09.101 { 00:08:09.101 "name": "BaseBdev2", 00:08:09.101 "aliases": [ 00:08:09.101 "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0" 00:08:09.101 ], 00:08:09.101 "product_name": "Malloc disk", 00:08:09.101 "block_size": 512, 00:08:09.101 "num_blocks": 65536, 00:08:09.101 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:09.101 "assigned_rate_limits": { 00:08:09.101 "rw_ios_per_sec": 0, 00:08:09.101 "rw_mbytes_per_sec": 0, 00:08:09.101 "r_mbytes_per_sec": 0, 00:08:09.101 "w_mbytes_per_sec": 0 00:08:09.101 }, 00:08:09.101 "claimed": false, 00:08:09.101 "zoned": false, 00:08:09.101 "supported_io_types": { 00:08:09.101 "read": true, 00:08:09.101 "write": true, 00:08:09.101 "unmap": true, 00:08:09.101 "flush": true, 00:08:09.101 "reset": true, 00:08:09.101 "nvme_admin": false, 00:08:09.101 "nvme_io": false, 00:08:09.101 "nvme_io_md": false, 00:08:09.101 "write_zeroes": true, 00:08:09.101 "zcopy": true, 00:08:09.101 "get_zone_info": false, 00:08:09.101 
"zone_management": false, 00:08:09.101 "zone_append": false, 00:08:09.101 "compare": false, 00:08:09.101 "compare_and_write": false, 00:08:09.101 "abort": true, 00:08:09.101 "seek_hole": false, 00:08:09.102 "seek_data": false, 00:08:09.102 "copy": true, 00:08:09.102 "nvme_iov_md": false 00:08:09.102 }, 00:08:09.102 "memory_domains": [ 00:08:09.102 { 00:08:09.102 "dma_device_id": "system", 00:08:09.102 "dma_device_type": 1 00:08:09.102 }, 00:08:09.102 { 00:08:09.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.102 "dma_device_type": 2 00:08:09.102 } 00:08:09.102 ], 00:08:09.102 "driver_specific": {} 00:08:09.102 } 00:08:09.102 ] 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 BaseBdev3 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.102 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 [ 00:08:09.362 { 00:08:09.362 "name": "BaseBdev3", 00:08:09.362 "aliases": [ 00:08:09.362 "a088d7b4-b1c9-49a3-af23-1676f622b914" 00:08:09.362 ], 00:08:09.362 "product_name": "Malloc disk", 00:08:09.362 "block_size": 512, 00:08:09.362 "num_blocks": 65536, 00:08:09.362 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:09.362 "assigned_rate_limits": { 00:08:09.362 "rw_ios_per_sec": 0, 00:08:09.362 "rw_mbytes_per_sec": 0, 00:08:09.362 "r_mbytes_per_sec": 0, 00:08:09.362 "w_mbytes_per_sec": 0 00:08:09.362 }, 00:08:09.362 "claimed": false, 00:08:09.362 "zoned": false, 00:08:09.362 "supported_io_types": { 00:08:09.362 "read": true, 00:08:09.362 "write": true, 00:08:09.362 "unmap": true, 00:08:09.362 "flush": true, 00:08:09.362 "reset": true, 00:08:09.362 "nvme_admin": false, 00:08:09.362 "nvme_io": false, 00:08:09.362 "nvme_io_md": false, 00:08:09.362 "write_zeroes": true, 00:08:09.362 
"zcopy": true, 00:08:09.362 "get_zone_info": false, 00:08:09.362 "zone_management": false, 00:08:09.362 "zone_append": false, 00:08:09.362 "compare": false, 00:08:09.362 "compare_and_write": false, 00:08:09.362 "abort": true, 00:08:09.362 "seek_hole": false, 00:08:09.362 "seek_data": false, 00:08:09.362 "copy": true, 00:08:09.362 "nvme_iov_md": false 00:08:09.362 }, 00:08:09.362 "memory_domains": [ 00:08:09.362 { 00:08:09.362 "dma_device_id": "system", 00:08:09.362 "dma_device_type": 1 00:08:09.362 }, 00:08:09.362 { 00:08:09.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.362 "dma_device_type": 2 00:08:09.362 } 00:08:09.362 ], 00:08:09.362 "driver_specific": {} 00:08:09.362 } 00:08:09.362 ] 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 [2024-12-15 18:39:09.566963] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.362 [2024-12-15 18:39:09.567101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.362 [2024-12-15 18:39:09.567146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.362 [2024-12-15 18:39:09.569213] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.362 18:39:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.362 "name": "Existed_Raid", 00:08:09.362 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:09.362 "strip_size_kb": 64, 00:08:09.362 "state": "configuring", 00:08:09.362 "raid_level": "concat", 00:08:09.362 "superblock": true, 00:08:09.362 "num_base_bdevs": 3, 00:08:09.362 "num_base_bdevs_discovered": 2, 00:08:09.362 "num_base_bdevs_operational": 3, 00:08:09.362 "base_bdevs_list": [ 00:08:09.362 { 00:08:09.362 "name": "BaseBdev1", 00:08:09.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.362 "is_configured": false, 00:08:09.362 "data_offset": 0, 00:08:09.362 "data_size": 0 00:08:09.362 }, 00:08:09.362 { 00:08:09.362 "name": "BaseBdev2", 00:08:09.362 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:09.362 "is_configured": true, 00:08:09.362 "data_offset": 2048, 00:08:09.362 "data_size": 63488 00:08:09.362 }, 00:08:09.362 { 00:08:09.362 "name": "BaseBdev3", 00:08:09.362 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:09.362 "is_configured": true, 00:08:09.362 "data_offset": 2048, 00:08:09.362 "data_size": 63488 00:08:09.362 } 00:08:09.362 ] 00:08:09.362 }' 00:08:09.362 18:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.363 18:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.622 [2024-12-15 18:39:10.030226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.622 18:39:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.622 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.882 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.882 "name": "Existed_Raid", 00:08:09.882 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:09.882 "strip_size_kb": 64, 
00:08:09.882 "state": "configuring", 00:08:09.882 "raid_level": "concat", 00:08:09.882 "superblock": true, 00:08:09.882 "num_base_bdevs": 3, 00:08:09.882 "num_base_bdevs_discovered": 1, 00:08:09.882 "num_base_bdevs_operational": 3, 00:08:09.882 "base_bdevs_list": [ 00:08:09.882 { 00:08:09.882 "name": "BaseBdev1", 00:08:09.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.882 "is_configured": false, 00:08:09.882 "data_offset": 0, 00:08:09.882 "data_size": 0 00:08:09.882 }, 00:08:09.882 { 00:08:09.882 "name": null, 00:08:09.882 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:09.882 "is_configured": false, 00:08:09.882 "data_offset": 0, 00:08:09.882 "data_size": 63488 00:08:09.882 }, 00:08:09.882 { 00:08:09.882 "name": "BaseBdev3", 00:08:09.882 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:09.882 "is_configured": true, 00:08:09.882 "data_offset": 2048, 00:08:09.882 "data_size": 63488 00:08:09.882 } 00:08:09.882 ] 00:08:09.882 }' 00:08:09.882 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.882 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 [2024-12-15 18:39:10.554385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.142 BaseBdev1 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 
[ 00:08:10.142 { 00:08:10.142 "name": "BaseBdev1", 00:08:10.142 "aliases": [ 00:08:10.142 "67d461be-e85a-4236-982a-07b64f385ed5" 00:08:10.142 ], 00:08:10.142 "product_name": "Malloc disk", 00:08:10.142 "block_size": 512, 00:08:10.402 "num_blocks": 65536, 00:08:10.402 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:10.402 "assigned_rate_limits": { 00:08:10.402 "rw_ios_per_sec": 0, 00:08:10.402 "rw_mbytes_per_sec": 0, 00:08:10.402 "r_mbytes_per_sec": 0, 00:08:10.402 "w_mbytes_per_sec": 0 00:08:10.402 }, 00:08:10.402 "claimed": true, 00:08:10.402 "claim_type": "exclusive_write", 00:08:10.402 "zoned": false, 00:08:10.402 "supported_io_types": { 00:08:10.402 "read": true, 00:08:10.402 "write": true, 00:08:10.402 "unmap": true, 00:08:10.402 "flush": true, 00:08:10.402 "reset": true, 00:08:10.402 "nvme_admin": false, 00:08:10.402 "nvme_io": false, 00:08:10.402 "nvme_io_md": false, 00:08:10.402 "write_zeroes": true, 00:08:10.402 "zcopy": true, 00:08:10.402 "get_zone_info": false, 00:08:10.402 "zone_management": false, 00:08:10.402 "zone_append": false, 00:08:10.402 "compare": false, 00:08:10.402 "compare_and_write": false, 00:08:10.402 "abort": true, 00:08:10.402 "seek_hole": false, 00:08:10.402 "seek_data": false, 00:08:10.402 "copy": true, 00:08:10.402 "nvme_iov_md": false 00:08:10.402 }, 00:08:10.402 "memory_domains": [ 00:08:10.402 { 00:08:10.402 "dma_device_id": "system", 00:08:10.402 "dma_device_type": 1 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.402 "dma_device_type": 2 00:08:10.402 } 00:08:10.402 ], 00:08:10.402 "driver_specific": {} 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.403 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.403 "name": "Existed_Raid", 00:08:10.403 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:10.403 "strip_size_kb": 64, 00:08:10.403 "state": "configuring", 00:08:10.403 "raid_level": "concat", 00:08:10.403 "superblock": true, 
00:08:10.403 "num_base_bdevs": 3, 00:08:10.403 "num_base_bdevs_discovered": 2, 00:08:10.403 "num_base_bdevs_operational": 3, 00:08:10.403 "base_bdevs_list": [ 00:08:10.403 { 00:08:10.403 "name": "BaseBdev1", 00:08:10.403 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:10.403 "is_configured": true, 00:08:10.403 "data_offset": 2048, 00:08:10.403 "data_size": 63488 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "name": null, 00:08:10.403 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:10.403 "is_configured": false, 00:08:10.403 "data_offset": 0, 00:08:10.403 "data_size": 63488 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "name": "BaseBdev3", 00:08:10.403 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:10.403 "is_configured": true, 00:08:10.403 "data_offset": 2048, 00:08:10.403 "data_size": 63488 00:08:10.403 } 00:08:10.403 ] 00:08:10.403 }' 00:08:10.403 18:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.403 18:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.662 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.662 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.662 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.662 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.663 [2024-12-15 18:39:11.077590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.663 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:10.922 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.922 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.922 "name": "Existed_Raid", 00:08:10.922 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:10.922 "strip_size_kb": 64, 00:08:10.922 "state": "configuring", 00:08:10.922 "raid_level": "concat", 00:08:10.922 "superblock": true, 00:08:10.922 "num_base_bdevs": 3, 00:08:10.922 "num_base_bdevs_discovered": 1, 00:08:10.922 "num_base_bdevs_operational": 3, 00:08:10.922 "base_bdevs_list": [ 00:08:10.922 { 00:08:10.922 "name": "BaseBdev1", 00:08:10.922 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:10.922 "is_configured": true, 00:08:10.922 "data_offset": 2048, 00:08:10.922 "data_size": 63488 00:08:10.922 }, 00:08:10.922 { 00:08:10.922 "name": null, 00:08:10.922 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:10.922 "is_configured": false, 00:08:10.922 "data_offset": 0, 00:08:10.922 "data_size": 63488 00:08:10.922 }, 00:08:10.922 { 00:08:10.923 "name": null, 00:08:10.923 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:10.923 "is_configured": false, 00:08:10.923 "data_offset": 0, 00:08:10.923 "data_size": 63488 00:08:10.923 } 00:08:10.923 ] 00:08:10.923 }' 00:08:10.923 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.923 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.182 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.183 [2024-12-15 18:39:11.504890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.183 "name": "Existed_Raid", 00:08:11.183 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:11.183 "strip_size_kb": 64, 00:08:11.183 "state": "configuring", 00:08:11.183 "raid_level": "concat", 00:08:11.183 "superblock": true, 00:08:11.183 "num_base_bdevs": 3, 00:08:11.183 "num_base_bdevs_discovered": 2, 00:08:11.183 "num_base_bdevs_operational": 3, 00:08:11.183 "base_bdevs_list": [ 00:08:11.183 { 00:08:11.183 "name": "BaseBdev1", 00:08:11.183 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:11.183 "is_configured": true, 00:08:11.183 "data_offset": 2048, 00:08:11.183 "data_size": 63488 00:08:11.183 }, 00:08:11.183 { 00:08:11.183 "name": null, 00:08:11.183 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:11.183 "is_configured": false, 00:08:11.183 "data_offset": 0, 00:08:11.183 "data_size": 63488 00:08:11.183 }, 00:08:11.183 { 00:08:11.183 "name": "BaseBdev3", 00:08:11.183 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:11.183 "is_configured": true, 00:08:11.183 "data_offset": 2048, 00:08:11.183 "data_size": 63488 00:08:11.183 } 00:08:11.183 ] 00:08:11.183 }' 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.183 18:39:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.753 [2024-12-15 18:39:11.972122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.753 18:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.753 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.753 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.753 "name": "Existed_Raid", 00:08:11.753 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:11.753 "strip_size_kb": 64, 00:08:11.753 "state": "configuring", 00:08:11.753 "raid_level": "concat", 00:08:11.753 "superblock": true, 00:08:11.753 "num_base_bdevs": 3, 00:08:11.753 "num_base_bdevs_discovered": 1, 00:08:11.753 "num_base_bdevs_operational": 3, 00:08:11.753 "base_bdevs_list": [ 00:08:11.753 { 00:08:11.753 "name": null, 00:08:11.753 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:11.753 "is_configured": false, 00:08:11.753 "data_offset": 0, 00:08:11.753 "data_size": 63488 00:08:11.753 }, 00:08:11.753 { 00:08:11.753 "name": null, 00:08:11.753 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:11.753 "is_configured": false, 00:08:11.753 "data_offset": 0, 
00:08:11.753 "data_size": 63488 00:08:11.753 }, 00:08:11.753 { 00:08:11.753 "name": "BaseBdev3", 00:08:11.753 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:11.753 "is_configured": true, 00:08:11.753 "data_offset": 2048, 00:08:11.753 "data_size": 63488 00:08:11.753 } 00:08:11.753 ] 00:08:11.753 }' 00:08:11.753 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.753 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 [2024-12-15 18:39:12.439050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:12.013 18:39:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.013 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.273 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.273 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.273 "name": "Existed_Raid", 00:08:12.273 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:12.273 "strip_size_kb": 64, 00:08:12.273 "state": "configuring", 00:08:12.273 "raid_level": "concat", 00:08:12.273 "superblock": true, 00:08:12.273 "num_base_bdevs": 3, 00:08:12.273 
"num_base_bdevs_discovered": 2, 00:08:12.273 "num_base_bdevs_operational": 3, 00:08:12.273 "base_bdevs_list": [ 00:08:12.273 { 00:08:12.273 "name": null, 00:08:12.273 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:12.273 "is_configured": false, 00:08:12.273 "data_offset": 0, 00:08:12.273 "data_size": 63488 00:08:12.273 }, 00:08:12.273 { 00:08:12.273 "name": "BaseBdev2", 00:08:12.273 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:12.273 "is_configured": true, 00:08:12.273 "data_offset": 2048, 00:08:12.273 "data_size": 63488 00:08:12.273 }, 00:08:12.273 { 00:08:12.273 "name": "BaseBdev3", 00:08:12.273 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:12.273 "is_configured": true, 00:08:12.273 "data_offset": 2048, 00:08:12.273 "data_size": 63488 00:08:12.273 } 00:08:12.273 ] 00:08:12.273 }' 00:08:12.273 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.273 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.533 18:39:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67d461be-e85a-4236-982a-07b64f385ed5 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.533 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.793 NewBaseBdev 00:08:12.793 [2024-12-15 18:39:12.986864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:12.793 [2024-12-15 18:39:12.987056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:12.793 [2024-12-15 18:39:12.987073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.793 [2024-12-15 18:39:12.987350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:12.793 [2024-12-15 18:39:12.987483] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:12.793 [2024-12-15 18:39:12.987493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:12.793 [2024-12-15 18:39:12.987611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.793 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.794 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.794 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:12.794 18:39:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.794 [ 00:08:12.794 { 00:08:12.794 "name": "NewBaseBdev", 00:08:12.794 "aliases": [ 00:08:12.794 "67d461be-e85a-4236-982a-07b64f385ed5" 00:08:12.794 ], 00:08:12.794 "product_name": "Malloc disk", 00:08:12.794 "block_size": 512, 00:08:12.794 "num_blocks": 65536, 00:08:12.794 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:12.794 "assigned_rate_limits": { 00:08:12.794 "rw_ios_per_sec": 0, 00:08:12.794 "rw_mbytes_per_sec": 0, 00:08:12.794 "r_mbytes_per_sec": 0, 00:08:12.794 "w_mbytes_per_sec": 0 00:08:12.794 }, 00:08:12.794 "claimed": true, 00:08:12.794 "claim_type": "exclusive_write", 00:08:12.794 "zoned": false, 00:08:12.794 "supported_io_types": { 00:08:12.794 "read": true, 00:08:12.794 "write": true, 
00:08:12.794 "unmap": true, 00:08:12.794 "flush": true, 00:08:12.794 "reset": true, 00:08:12.794 "nvme_admin": false, 00:08:12.794 "nvme_io": false, 00:08:12.794 "nvme_io_md": false, 00:08:12.794 "write_zeroes": true, 00:08:12.794 "zcopy": true, 00:08:12.794 "get_zone_info": false, 00:08:12.794 "zone_management": false, 00:08:12.794 "zone_append": false, 00:08:12.794 "compare": false, 00:08:12.794 "compare_and_write": false, 00:08:12.794 "abort": true, 00:08:12.794 "seek_hole": false, 00:08:12.794 "seek_data": false, 00:08:12.794 "copy": true, 00:08:12.794 "nvme_iov_md": false 00:08:12.794 }, 00:08:12.794 "memory_domains": [ 00:08:12.794 { 00:08:12.794 "dma_device_id": "system", 00:08:12.794 "dma_device_type": 1 00:08:12.794 }, 00:08:12.794 { 00:08:12.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.794 "dma_device_type": 2 00:08:12.794 } 00:08:12.794 ], 00:08:12.794 "driver_specific": {} 00:08:12.794 } 00:08:12.794 ] 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.794 "name": "Existed_Raid", 00:08:12.794 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:12.794 "strip_size_kb": 64, 00:08:12.794 "state": "online", 00:08:12.794 "raid_level": "concat", 00:08:12.794 "superblock": true, 00:08:12.794 "num_base_bdevs": 3, 00:08:12.794 "num_base_bdevs_discovered": 3, 00:08:12.794 "num_base_bdevs_operational": 3, 00:08:12.794 "base_bdevs_list": [ 00:08:12.794 { 00:08:12.794 "name": "NewBaseBdev", 00:08:12.794 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:12.794 "is_configured": true, 00:08:12.794 "data_offset": 2048, 00:08:12.794 "data_size": 63488 00:08:12.794 }, 00:08:12.794 { 00:08:12.794 "name": "BaseBdev2", 00:08:12.794 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:12.794 "is_configured": true, 00:08:12.794 "data_offset": 2048, 00:08:12.794 "data_size": 63488 00:08:12.794 }, 00:08:12.794 { 00:08:12.794 "name": "BaseBdev3", 00:08:12.794 "uuid": 
"a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:12.794 "is_configured": true, 00:08:12.794 "data_offset": 2048, 00:08:12.794 "data_size": 63488 00:08:12.794 } 00:08:12.794 ] 00:08:12.794 }' 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.794 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.054 [2024-12-15 18:39:13.458503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.054 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.054 "name": "Existed_Raid", 00:08:13.054 "aliases": [ 00:08:13.054 "226cf711-535b-4d69-bad1-709450cba1eb" 
00:08:13.054 ], 00:08:13.054 "product_name": "Raid Volume", 00:08:13.054 "block_size": 512, 00:08:13.054 "num_blocks": 190464, 00:08:13.054 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:13.054 "assigned_rate_limits": { 00:08:13.054 "rw_ios_per_sec": 0, 00:08:13.054 "rw_mbytes_per_sec": 0, 00:08:13.054 "r_mbytes_per_sec": 0, 00:08:13.054 "w_mbytes_per_sec": 0 00:08:13.054 }, 00:08:13.054 "claimed": false, 00:08:13.054 "zoned": false, 00:08:13.054 "supported_io_types": { 00:08:13.054 "read": true, 00:08:13.054 "write": true, 00:08:13.054 "unmap": true, 00:08:13.054 "flush": true, 00:08:13.054 "reset": true, 00:08:13.054 "nvme_admin": false, 00:08:13.054 "nvme_io": false, 00:08:13.054 "nvme_io_md": false, 00:08:13.054 "write_zeroes": true, 00:08:13.054 "zcopy": false, 00:08:13.054 "get_zone_info": false, 00:08:13.054 "zone_management": false, 00:08:13.054 "zone_append": false, 00:08:13.054 "compare": false, 00:08:13.054 "compare_and_write": false, 00:08:13.054 "abort": false, 00:08:13.054 "seek_hole": false, 00:08:13.054 "seek_data": false, 00:08:13.054 "copy": false, 00:08:13.054 "nvme_iov_md": false 00:08:13.054 }, 00:08:13.054 "memory_domains": [ 00:08:13.054 { 00:08:13.054 "dma_device_id": "system", 00:08:13.054 "dma_device_type": 1 00:08:13.054 }, 00:08:13.054 { 00:08:13.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.054 "dma_device_type": 2 00:08:13.054 }, 00:08:13.054 { 00:08:13.054 "dma_device_id": "system", 00:08:13.054 "dma_device_type": 1 00:08:13.054 }, 00:08:13.054 { 00:08:13.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.054 "dma_device_type": 2 00:08:13.054 }, 00:08:13.054 { 00:08:13.054 "dma_device_id": "system", 00:08:13.054 "dma_device_type": 1 00:08:13.054 }, 00:08:13.054 { 00:08:13.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.054 "dma_device_type": 2 00:08:13.054 } 00:08:13.054 ], 00:08:13.054 "driver_specific": { 00:08:13.054 "raid": { 00:08:13.054 "uuid": "226cf711-535b-4d69-bad1-709450cba1eb", 00:08:13.054 
"strip_size_kb": 64, 00:08:13.054 "state": "online", 00:08:13.054 "raid_level": "concat", 00:08:13.054 "superblock": true, 00:08:13.054 "num_base_bdevs": 3, 00:08:13.054 "num_base_bdevs_discovered": 3, 00:08:13.055 "num_base_bdevs_operational": 3, 00:08:13.055 "base_bdevs_list": [ 00:08:13.055 { 00:08:13.055 "name": "NewBaseBdev", 00:08:13.055 "uuid": "67d461be-e85a-4236-982a-07b64f385ed5", 00:08:13.055 "is_configured": true, 00:08:13.055 "data_offset": 2048, 00:08:13.055 "data_size": 63488 00:08:13.055 }, 00:08:13.055 { 00:08:13.055 "name": "BaseBdev2", 00:08:13.055 "uuid": "06b18a70-fdd2-4a71-bbbc-e9aecdebf4b0", 00:08:13.055 "is_configured": true, 00:08:13.055 "data_offset": 2048, 00:08:13.055 "data_size": 63488 00:08:13.055 }, 00:08:13.055 { 00:08:13.055 "name": "BaseBdev3", 00:08:13.055 "uuid": "a088d7b4-b1c9-49a3-af23-1676f622b914", 00:08:13.055 "is_configured": true, 00:08:13.055 "data_offset": 2048, 00:08:13.055 "data_size": 63488 00:08:13.055 } 00:08:13.055 ] 00:08:13.055 } 00:08:13.055 } 00:08:13.055 }' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:13.315 BaseBdev2 00:08:13.315 BaseBdev3' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.315 18:39:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.315 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 [2024-12-15 18:39:13.753548] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.315 [2024-12-15 18:39:13.753591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.315 [2024-12-15 18:39:13.753679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.315 [2024-12-15 18:39:13.753744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.315 [2024-12-15 18:39:13.753758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79285 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79285 ']' 00:08:13.575 18:39:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 79285 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79285 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.575 killing process with pid 79285 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79285' 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 79285 00:08:13.575 [2024-12-15 18:39:13.797685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.575 18:39:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 79285 00:08:13.575 [2024-12-15 18:39:13.856867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:13.834 18:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:13.834 00:08:13.834 real 0m9.059s 00:08:13.834 user 0m15.313s 00:08:13.834 sys 0m1.817s 00:08:13.834 ************************************ 00:08:13.834 END TEST raid_state_function_test_sb 00:08:13.834 ************************************ 00:08:13.834 18:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.834 18:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.834 18:39:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:13.834 18:39:14 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:13.834 18:39:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.834 18:39:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:13.834 ************************************ 00:08:13.834 START TEST raid_superblock_test 00:08:13.834 ************************************ 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:13.834 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:13.835 18:39:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79889 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79889 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79889 ']' 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.835 18:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.094 [2024-12-15 18:39:14.345847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:14.094 [2024-12-15 18:39:14.346055] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79889 ] 00:08:14.094 [2024-12-15 18:39:14.516474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.355 [2024-12-15 18:39:14.559169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.355 [2024-12-15 18:39:14.636726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.355 [2024-12-15 18:39:14.636773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:14.924 
18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 malloc1 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 [2024-12-15 18:39:15.198052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:14.924 [2024-12-15 18:39:15.198222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.924 [2024-12-15 18:39:15.198266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:14.924 [2024-12-15 18:39:15.198312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.924 [2024-12-15 18:39:15.200731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.924 [2024-12-15 18:39:15.200818] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.924 pt1 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 malloc2 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 [2024-12-15 18:39:15.236491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.924 [2024-12-15 18:39:15.236595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.924 [2024-12-15 18:39:15.236628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:14.924 [2024-12-15 18:39:15.236659] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.924 [2024-12-15 18:39:15.239038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.924 [2024-12-15 18:39:15.239110] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.924 
pt2 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 malloc3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 [2024-12-15 18:39:15.271351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:14.924 [2024-12-15 18:39:15.271461] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.924 [2024-12-15 18:39:15.271501] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:14.924 [2024-12-15 18:39:15.271537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.924 [2024-12-15 18:39:15.273946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.924 [2024-12-15 18:39:15.274016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:14.924 pt3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 [2024-12-15 18:39:15.283383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.924 [2024-12-15 18:39:15.285583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.924 [2024-12-15 18:39:15.285674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:14.924 [2024-12-15 18:39:15.285848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:14.924 [2024-12-15 18:39:15.285892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.924 [2024-12-15 18:39:15.286192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:14.924 [2024-12-15 18:39:15.286367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:14.924 [2024-12-15 18:39:15.286409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:14.924 [2024-12-15 18:39:15.286559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.924 18:39:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.924 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.924 "name": "raid_bdev1", 00:08:14.924 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:14.924 "strip_size_kb": 64, 00:08:14.924 "state": "online", 00:08:14.924 "raid_level": "concat", 00:08:14.924 "superblock": true, 00:08:14.924 "num_base_bdevs": 3, 00:08:14.924 "num_base_bdevs_discovered": 3, 00:08:14.924 "num_base_bdevs_operational": 3, 00:08:14.924 "base_bdevs_list": [ 00:08:14.924 { 00:08:14.924 "name": "pt1", 00:08:14.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.924 "is_configured": true, 00:08:14.924 "data_offset": 2048, 00:08:14.924 "data_size": 63488 00:08:14.924 }, 00:08:14.924 { 00:08:14.924 "name": "pt2", 00:08:14.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.925 "is_configured": true, 00:08:14.925 "data_offset": 2048, 00:08:14.925 "data_size": 63488 00:08:14.925 }, 00:08:14.925 { 00:08:14.925 "name": "pt3", 00:08:14.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.925 "is_configured": true, 00:08:14.925 "data_offset": 2048, 00:08:14.925 "data_size": 63488 00:08:14.925 } 00:08:14.925 ] 00:08:14.925 }' 00:08:14.925 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.925 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.495 [2024-12-15 18:39:15.726960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.495 "name": "raid_bdev1", 00:08:15.495 "aliases": [ 00:08:15.495 "07860b5d-67cc-4ab2-801b-c676e28382f6" 00:08:15.495 ], 00:08:15.495 "product_name": "Raid Volume", 00:08:15.495 "block_size": 512, 00:08:15.495 "num_blocks": 190464, 00:08:15.495 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:15.495 "assigned_rate_limits": { 00:08:15.495 "rw_ios_per_sec": 0, 00:08:15.495 "rw_mbytes_per_sec": 0, 00:08:15.495 "r_mbytes_per_sec": 0, 00:08:15.495 "w_mbytes_per_sec": 0 00:08:15.495 }, 00:08:15.495 "claimed": false, 00:08:15.495 "zoned": false, 00:08:15.495 "supported_io_types": { 00:08:15.495 "read": true, 00:08:15.495 "write": true, 00:08:15.495 "unmap": true, 00:08:15.495 "flush": true, 00:08:15.495 "reset": true, 00:08:15.495 "nvme_admin": false, 00:08:15.495 "nvme_io": false, 00:08:15.495 "nvme_io_md": false, 00:08:15.495 "write_zeroes": true, 00:08:15.495 "zcopy": false, 00:08:15.495 "get_zone_info": false, 00:08:15.495 "zone_management": false, 00:08:15.495 "zone_append": false, 00:08:15.495 "compare": 
false, 00:08:15.495 "compare_and_write": false, 00:08:15.495 "abort": false, 00:08:15.495 "seek_hole": false, 00:08:15.495 "seek_data": false, 00:08:15.495 "copy": false, 00:08:15.495 "nvme_iov_md": false 00:08:15.495 }, 00:08:15.495 "memory_domains": [ 00:08:15.495 { 00:08:15.495 "dma_device_id": "system", 00:08:15.495 "dma_device_type": 1 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.495 "dma_device_type": 2 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "dma_device_id": "system", 00:08:15.495 "dma_device_type": 1 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.495 "dma_device_type": 2 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "dma_device_id": "system", 00:08:15.495 "dma_device_type": 1 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.495 "dma_device_type": 2 00:08:15.495 } 00:08:15.495 ], 00:08:15.495 "driver_specific": { 00:08:15.495 "raid": { 00:08:15.495 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:15.495 "strip_size_kb": 64, 00:08:15.495 "state": "online", 00:08:15.495 "raid_level": "concat", 00:08:15.495 "superblock": true, 00:08:15.495 "num_base_bdevs": 3, 00:08:15.495 "num_base_bdevs_discovered": 3, 00:08:15.495 "num_base_bdevs_operational": 3, 00:08:15.495 "base_bdevs_list": [ 00:08:15.495 { 00:08:15.495 "name": "pt1", 00:08:15.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.495 "is_configured": true, 00:08:15.495 "data_offset": 2048, 00:08:15.495 "data_size": 63488 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "name": "pt2", 00:08:15.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.495 "is_configured": true, 00:08:15.495 "data_offset": 2048, 00:08:15.495 "data_size": 63488 00:08:15.495 }, 00:08:15.495 { 00:08:15.495 "name": "pt3", 00:08:15.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.495 "is_configured": true, 00:08:15.495 "data_offset": 2048, 00:08:15.495 
"data_size": 63488 00:08:15.495 } 00:08:15.495 ] 00:08:15.495 } 00:08:15.495 } 00:08:15.495 }' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.495 pt2 00:08:15.495 pt3' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.495 18:39:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.495 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.496 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.496 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:15.496 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.496 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.496 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 [2024-12-15 18:39:15.986417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.756 18:39:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=07860b5d-67cc-4ab2-801b-c676e28382f6 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 07860b5d-67cc-4ab2-801b-c676e28382f6 ']' 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 [2024-12-15 18:39:16.018077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.756 [2024-12-15 18:39:16.018146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.756 [2024-12-15 18:39:16.018244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.756 [2024-12-15 18:39:16.018319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.756 [2024-12-15 18:39:16.018336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.756 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.756 [2024-12-15 18:39:16.189793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.756 [2024-12-15 18:39:16.192048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:08:15.756 [2024-12-15 18:39:16.192136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:15.756 [2024-12-15 18:39:16.192210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.756 [2024-12-15 18:39:16.192311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.756 [2024-12-15 18:39:16.192403] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:15.756 [2024-12-15 18:39:16.192449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.756 [2024-12-15 18:39:16.192510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:16.016 request: 00:08:16.016 { 00:08:16.016 "name": "raid_bdev1", 00:08:16.016 "raid_level": "concat", 00:08:16.016 "base_bdevs": [ 00:08:16.016 "malloc1", 00:08:16.016 "malloc2", 00:08:16.016 "malloc3" 00:08:16.016 ], 00:08:16.016 "strip_size_kb": 64, 00:08:16.016 "superblock": false, 00:08:16.016 "method": "bdev_raid_create", 00:08:16.016 "req_id": 1 00:08:16.016 } 00:08:16.016 Got JSON-RPC error response 00:08:16.016 response: 00:08:16.016 { 00:08:16.016 "code": -17, 00:08:16.016 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:16.016 } 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.016 [2024-12-15 18:39:16.257640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:16.016 [2024-12-15 18:39:16.257727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.016 [2024-12-15 18:39:16.257760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:16.016 [2024-12-15 18:39:16.257791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.016 [2024-12-15 18:39:16.260188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.016 [2024-12-15 18:39:16.260257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:16.016 [2024-12-15 18:39:16.260347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:16.016 [2024-12-15 18:39:16.260407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:16.016 pt1 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.016 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.017 "name": "raid_bdev1", 
00:08:16.017 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:16.017 "strip_size_kb": 64, 00:08:16.017 "state": "configuring", 00:08:16.017 "raid_level": "concat", 00:08:16.017 "superblock": true, 00:08:16.017 "num_base_bdevs": 3, 00:08:16.017 "num_base_bdevs_discovered": 1, 00:08:16.017 "num_base_bdevs_operational": 3, 00:08:16.017 "base_bdevs_list": [ 00:08:16.017 { 00:08:16.017 "name": "pt1", 00:08:16.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.017 "is_configured": true, 00:08:16.017 "data_offset": 2048, 00:08:16.017 "data_size": 63488 00:08:16.017 }, 00:08:16.017 { 00:08:16.017 "name": null, 00:08:16.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.017 "is_configured": false, 00:08:16.017 "data_offset": 2048, 00:08:16.017 "data_size": 63488 00:08:16.017 }, 00:08:16.017 { 00:08:16.017 "name": null, 00:08:16.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.017 "is_configured": false, 00:08:16.017 "data_offset": 2048, 00:08:16.017 "data_size": 63488 00:08:16.017 } 00:08:16.017 ] 00:08:16.017 }' 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.017 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.586 [2024-12-15 18:39:16.732903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.586 [2024-12-15 18:39:16.733059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.586 [2024-12-15 18:39:16.733087] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:16.586 [2024-12-15 18:39:16.733103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.586 [2024-12-15 18:39:16.733589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.586 [2024-12-15 18:39:16.733611] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.586 [2024-12-15 18:39:16.733696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.586 [2024-12-15 18:39:16.733731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.586 pt2 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.586 [2024-12-15 18:39:16.744847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.586 "name": "raid_bdev1", 00:08:16.586 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:16.586 "strip_size_kb": 64, 00:08:16.586 "state": "configuring", 00:08:16.586 "raid_level": "concat", 00:08:16.586 "superblock": true, 00:08:16.586 "num_base_bdevs": 3, 00:08:16.586 "num_base_bdevs_discovered": 1, 00:08:16.586 "num_base_bdevs_operational": 3, 00:08:16.586 "base_bdevs_list": [ 00:08:16.586 { 00:08:16.586 "name": "pt1", 00:08:16.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.586 "is_configured": true, 00:08:16.586 "data_offset": 2048, 00:08:16.586 "data_size": 63488 00:08:16.586 }, 00:08:16.586 { 00:08:16.586 "name": null, 00:08:16.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.586 "is_configured": false, 00:08:16.586 "data_offset": 0, 00:08:16.586 "data_size": 63488 00:08:16.586 }, 00:08:16.586 { 00:08:16.586 "name": null, 00:08:16.586 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.586 "is_configured": false, 00:08:16.586 "data_offset": 2048, 00:08:16.586 "data_size": 63488 00:08:16.586 } 00:08:16.586 ] 00:08:16.586 }' 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.586 18:39:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.846 [2024-12-15 18:39:17.204086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.846 [2024-12-15 18:39:17.204278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.846 [2024-12-15 18:39:17.204326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:16.846 [2024-12-15 18:39:17.204356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.846 [2024-12-15 18:39:17.204868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.846 [2024-12-15 18:39:17.204928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.846 [2024-12-15 18:39:17.205044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.846 [2024-12-15 18:39:17.205095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.846 pt2 00:08:16.846 18:39:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.846 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.847 [2024-12-15 18:39:17.215996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:16.847 [2024-12-15 18:39:17.216076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.847 [2024-12-15 18:39:17.216110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:16.847 [2024-12-15 18:39:17.216135] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.847 [2024-12-15 18:39:17.216487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.847 [2024-12-15 18:39:17.216538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:16.847 [2024-12-15 18:39:17.216616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:16.847 [2024-12-15 18:39:17.216664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:16.847 [2024-12-15 18:39:17.216782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:16.847 [2024-12-15 18:39:17.216833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.847 [2024-12-15 18:39:17.217111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:16.847 [2024-12-15 18:39:17.217250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:16.847 [2024-12-15 18:39:17.217289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:16.847 [2024-12-15 18:39:17.217436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.847 pt3 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.847 18:39:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.847 "name": "raid_bdev1", 00:08:16.847 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:16.847 "strip_size_kb": 64, 00:08:16.847 "state": "online", 00:08:16.847 "raid_level": "concat", 00:08:16.847 "superblock": true, 00:08:16.847 "num_base_bdevs": 3, 00:08:16.847 "num_base_bdevs_discovered": 3, 00:08:16.847 "num_base_bdevs_operational": 3, 00:08:16.847 "base_bdevs_list": [ 00:08:16.847 { 00:08:16.847 "name": "pt1", 00:08:16.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.847 "is_configured": true, 00:08:16.847 "data_offset": 2048, 00:08:16.847 "data_size": 63488 00:08:16.847 }, 00:08:16.847 { 00:08:16.847 "name": "pt2", 00:08:16.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.847 "is_configured": true, 00:08:16.847 "data_offset": 2048, 00:08:16.847 "data_size": 63488 00:08:16.847 }, 00:08:16.847 { 00:08:16.847 "name": "pt3", 00:08:16.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:16.847 "is_configured": true, 00:08:16.847 "data_offset": 2048, 00:08:16.847 "data_size": 63488 00:08:16.847 } 00:08:16.847 ] 00:08:16.847 }' 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.847 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.472 [2024-12-15 18:39:17.695575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.472 "name": "raid_bdev1", 00:08:17.472 "aliases": [ 00:08:17.472 "07860b5d-67cc-4ab2-801b-c676e28382f6" 00:08:17.472 ], 00:08:17.472 "product_name": "Raid Volume", 00:08:17.472 "block_size": 512, 00:08:17.472 "num_blocks": 190464, 00:08:17.472 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:17.472 "assigned_rate_limits": { 00:08:17.472 "rw_ios_per_sec": 0, 00:08:17.472 "rw_mbytes_per_sec": 0, 00:08:17.472 "r_mbytes_per_sec": 0, 00:08:17.472 "w_mbytes_per_sec": 0 00:08:17.472 }, 00:08:17.472 "claimed": false, 00:08:17.472 "zoned": false, 00:08:17.472 "supported_io_types": { 00:08:17.472 "read": true, 00:08:17.472 "write": true, 00:08:17.472 "unmap": true, 00:08:17.472 "flush": true, 00:08:17.472 "reset": true, 00:08:17.472 "nvme_admin": false, 00:08:17.472 "nvme_io": false, 
00:08:17.472 "nvme_io_md": false, 00:08:17.472 "write_zeroes": true, 00:08:17.472 "zcopy": false, 00:08:17.472 "get_zone_info": false, 00:08:17.472 "zone_management": false, 00:08:17.472 "zone_append": false, 00:08:17.472 "compare": false, 00:08:17.472 "compare_and_write": false, 00:08:17.472 "abort": false, 00:08:17.472 "seek_hole": false, 00:08:17.472 "seek_data": false, 00:08:17.472 "copy": false, 00:08:17.472 "nvme_iov_md": false 00:08:17.472 }, 00:08:17.472 "memory_domains": [ 00:08:17.472 { 00:08:17.472 "dma_device_id": "system", 00:08:17.472 "dma_device_type": 1 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.472 "dma_device_type": 2 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "dma_device_id": "system", 00:08:17.472 "dma_device_type": 1 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.472 "dma_device_type": 2 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "dma_device_id": "system", 00:08:17.472 "dma_device_type": 1 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.472 "dma_device_type": 2 00:08:17.472 } 00:08:17.472 ], 00:08:17.472 "driver_specific": { 00:08:17.472 "raid": { 00:08:17.472 "uuid": "07860b5d-67cc-4ab2-801b-c676e28382f6", 00:08:17.472 "strip_size_kb": 64, 00:08:17.472 "state": "online", 00:08:17.472 "raid_level": "concat", 00:08:17.472 "superblock": true, 00:08:17.472 "num_base_bdevs": 3, 00:08:17.472 "num_base_bdevs_discovered": 3, 00:08:17.472 "num_base_bdevs_operational": 3, 00:08:17.472 "base_bdevs_list": [ 00:08:17.472 { 00:08:17.472 "name": "pt1", 00:08:17.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.472 "is_configured": true, 00:08:17.472 "data_offset": 2048, 00:08:17.472 "data_size": 63488 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "name": "pt2", 00:08:17.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.472 "is_configured": true, 00:08:17.472 "data_offset": 2048, 00:08:17.472 
"data_size": 63488 00:08:17.472 }, 00:08:17.472 { 00:08:17.472 "name": "pt3", 00:08:17.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:17.472 "is_configured": true, 00:08:17.472 "data_offset": 2048, 00:08:17.472 "data_size": 63488 00:08:17.472 } 00:08:17.472 ] 00:08:17.472 } 00:08:17.472 } 00:08:17.472 }' 00:08:17.472 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:17.473 pt2 00:08:17.473 pt3' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.473 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:17.733 [2024-12-15 18:39:17.955091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 07860b5d-67cc-4ab2-801b-c676e28382f6 '!=' 07860b5d-67cc-4ab2-801b-c676e28382f6 ']' 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79889 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79889 ']' 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79889 00:08:17.733 18:39:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79889 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79889' 00:08:17.733 killing process with pid 79889 00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 79889 00:08:17.733 [2024-12-15 18:39:18.041047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:17.733 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79889 00:08:17.733 [2024-12-15 18:39:18.041188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.733 [2024-12-15 18:39:18.041288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.733 [2024-12-15 18:39:18.041303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:17.733 [2024-12-15 18:39:18.103805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.992 ************************************ 00:08:17.992 END TEST raid_superblock_test 00:08:17.992 ************************************ 00:08:17.992 18:39:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.992 00:08:17.992 real 0m4.169s 00:08:17.992 user 0m6.426s 00:08:17.992 sys 0m0.956s 00:08:17.992 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.992 18:39:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.252 18:39:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:18.252 18:39:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.253 18:39:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.253 18:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.253 ************************************ 00:08:18.253 START TEST raid_read_error_test 00:08:18.253 ************************************ 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:18.253 18:39:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sDuBlPOdOC 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80132 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80132 00:08:18.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 80132 ']' 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.253 18:39:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.253 [2024-12-15 18:39:18.609018] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:18.253 [2024-12-15 18:39:18.609286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80132 ] 00:08:18.513 [2024-12-15 18:39:18.791744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.513 [2024-12-15 18:39:18.833564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.513 [2024-12-15 18:39:18.909461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.513 [2024-12-15 18:39:18.909506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.082 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.082 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:19.082 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 BaseBdev1_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 true 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 [2024-12-15 18:39:19.462859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:19.083 [2024-12-15 18:39:19.463023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.083 [2024-12-15 18:39:19.463068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:19.083 [2024-12-15 18:39:19.463083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.083 [2024-12-15 18:39:19.466455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.083 [2024-12-15 18:39:19.466562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:19.083 BaseBdev1 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 BaseBdev2_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 true 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.083 [2024-12-15 18:39:19.503782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.083 [2024-12-15 18:39:19.503847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.083 [2024-12-15 18:39:19.503868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:19.083 [2024-12-15 18:39:19.503877] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.083 [2024-12-15 18:39:19.506030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.083 [2024-12-15 18:39:19.506066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.083 BaseBdev2 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.083 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.343 BaseBdev3_malloc 00:08:19.343 18:39:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.343 true 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.343 [2024-12-15 18:39:19.544770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:19.343 [2024-12-15 18:39:19.544831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.343 [2024-12-15 18:39:19.544855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:19.343 [2024-12-15 18:39:19.544864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.343 [2024-12-15 18:39:19.547063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.343 [2024-12-15 18:39:19.547097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:19.343 BaseBdev3 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.343 [2024-12-15 18:39:19.556808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.343 [2024-12-15 18:39:19.558689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.343 [2024-12-15 18:39:19.558772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:19.343 [2024-12-15 18:39:19.558959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:19.343 [2024-12-15 18:39:19.558978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:19.343 [2024-12-15 18:39:19.559233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.343 [2024-12-15 18:39:19.559395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:19.343 [2024-12-15 18:39:19.559406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:19.343 [2024-12-15 18:39:19.559532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.343 18:39:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.343 "name": "raid_bdev1", 00:08:19.343 "uuid": "cf61d5f8-0d0c-485f-90df-25e938dd309c", 00:08:19.343 "strip_size_kb": 64, 00:08:19.343 "state": "online", 00:08:19.343 "raid_level": "concat", 00:08:19.343 "superblock": true, 00:08:19.343 "num_base_bdevs": 3, 00:08:19.343 "num_base_bdevs_discovered": 3, 00:08:19.343 "num_base_bdevs_operational": 3, 00:08:19.343 "base_bdevs_list": [ 00:08:19.343 { 00:08:19.343 "name": "BaseBdev1", 00:08:19.343 "uuid": "35546384-dd3b-5ae9-8c77-c11fcfd74a61", 00:08:19.343 "is_configured": true, 00:08:19.343 "data_offset": 2048, 00:08:19.343 "data_size": 63488 00:08:19.343 }, 00:08:19.343 { 00:08:19.343 "name": "BaseBdev2", 00:08:19.343 "uuid": "b49b6084-2dbc-5a8b-8ca2-28de3241d339", 00:08:19.343 "is_configured": true, 00:08:19.343 "data_offset": 2048, 00:08:19.343 "data_size": 63488 
00:08:19.343 }, 00:08:19.343 { 00:08:19.343 "name": "BaseBdev3", 00:08:19.343 "uuid": "f5b056fc-85b4-5055-8aaf-65b2639e52f9", 00:08:19.343 "is_configured": true, 00:08:19.343 "data_offset": 2048, 00:08:19.343 "data_size": 63488 00:08:19.343 } 00:08:19.343 ] 00:08:19.343 }' 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.343 18:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.603 18:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.603 18:39:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.863 [2024-12-15 18:39:20.136325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:20.802 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.803 "name": "raid_bdev1", 00:08:20.803 "uuid": "cf61d5f8-0d0c-485f-90df-25e938dd309c", 00:08:20.803 "strip_size_kb": 64, 00:08:20.803 "state": "online", 00:08:20.803 "raid_level": "concat", 00:08:20.803 "superblock": true, 00:08:20.803 "num_base_bdevs": 3, 00:08:20.803 "num_base_bdevs_discovered": 3, 00:08:20.803 "num_base_bdevs_operational": 3, 00:08:20.803 "base_bdevs_list": [ 00:08:20.803 { 00:08:20.803 "name": "BaseBdev1", 00:08:20.803 "uuid": "35546384-dd3b-5ae9-8c77-c11fcfd74a61", 00:08:20.803 "is_configured": true, 00:08:20.803 "data_offset": 2048, 00:08:20.803 "data_size": 63488 
00:08:20.803 }, 00:08:20.803 { 00:08:20.803 "name": "BaseBdev2", 00:08:20.803 "uuid": "b49b6084-2dbc-5a8b-8ca2-28de3241d339", 00:08:20.803 "is_configured": true, 00:08:20.803 "data_offset": 2048, 00:08:20.803 "data_size": 63488 00:08:20.803 }, 00:08:20.803 { 00:08:20.803 "name": "BaseBdev3", 00:08:20.803 "uuid": "f5b056fc-85b4-5055-8aaf-65b2639e52f9", 00:08:20.803 "is_configured": true, 00:08:20.803 "data_offset": 2048, 00:08:20.803 "data_size": 63488 00:08:20.803 } 00:08:20.803 ] 00:08:20.803 }' 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.803 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.372 [2024-12-15 18:39:21.528902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.372 [2024-12-15 18:39:21.528998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.372 [2024-12-15 18:39:21.531684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.372 [2024-12-15 18:39:21.531749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.372 [2024-12-15 18:39:21.531784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.372 [2024-12-15 18:39:21.531796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:21.372 { 00:08:21.372 "results": [ 00:08:21.372 { 00:08:21.372 "job": "raid_bdev1", 00:08:21.372 "core_mask": "0x1", 00:08:21.372 "workload": "randrw", 00:08:21.372 "percentage": 50, 
00:08:21.372 "status": "finished", 00:08:21.372 "queue_depth": 1, 00:08:21.372 "io_size": 131072, 00:08:21.372 "runtime": 1.39329, 00:08:21.372 "iops": 15624.17012969303, 00:08:21.372 "mibps": 1953.0212662116287, 00:08:21.372 "io_failed": 1, 00:08:21.372 "io_timeout": 0, 00:08:21.372 "avg_latency_us": 88.50662628150995, 00:08:21.372 "min_latency_us": 24.817467248908297, 00:08:21.372 "max_latency_us": 1581.1633187772925 00:08:21.372 } 00:08:21.372 ], 00:08:21.372 "core_count": 1 00:08:21.372 } 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80132 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 80132 ']' 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 80132 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80132 00:08:21.372 killing process with pid 80132 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80132' 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 80132 00:08:21.372 [2024-12-15 18:39:21.570143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 80132 00:08:21.372 [2024-12-15 
18:39:21.596866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sDuBlPOdOC 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.372 ************************************ 00:08:21.372 END TEST raid_read_error_test 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.372 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.373 18:39:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:21.373 00:08:21.373 real 0m3.306s 00:08:21.373 user 0m4.164s 00:08:21.373 sys 0m0.596s 00:08:21.373 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.373 18:39:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.373 ************************************ 00:08:21.633 18:39:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:21.633 18:39:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.633 18:39:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.633 18:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 ************************************ 00:08:21.633 START TEST raid_write_error_test 00:08:21.633 ************************************ 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:21.633 18:39:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.633 18:39:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u2Qw6W7BM8 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80265 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80265 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 80265 ']' 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.633 18:39:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.633 [2024-12-15 18:39:21.977262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:21.633 [2024-12-15 18:39:21.977455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80265 ] 00:08:21.893 [2024-12-15 18:39:22.125961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.893 [2024-12-15 18:39:22.155606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.893 [2024-12-15 18:39:22.200022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.893 [2024-12-15 18:39:22.200143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 BaseBdev1_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 true 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 [2024-12-15 18:39:22.840501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.462 [2024-12-15 18:39:22.840559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.462 [2024-12-15 18:39:22.840584] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.462 [2024-12-15 18:39:22.840594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.462 [2024-12-15 18:39:22.842771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.462 [2024-12-15 18:39:22.842926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.462 BaseBdev1 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.462 BaseBdev2_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 true 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 [2024-12-15 18:39:22.881582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.462 [2024-12-15 18:39:22.881653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.462 [2024-12-15 18:39:22.881681] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.462 [2024-12-15 18:39:22.881691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.462 [2024-12-15 18:39:22.884014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.462 [2024-12-15 18:39:22.884057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.462 BaseBdev2 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.462 18:39:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.462 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 BaseBdev3_malloc 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 true 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.722 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 [2024-12-15 18:39:22.922823] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.722 [2024-12-15 18:39:22.922931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.723 [2024-12-15 18:39:22.922975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:22.723 [2024-12-15 18:39:22.923015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.723 [2024-12-15 18:39:22.925268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.723 [2024-12-15 18:39:22.925344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:22.723 BaseBdev3 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.723 [2024-12-15 18:39:22.934852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.723 [2024-12-15 18:39:22.937038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.723 [2024-12-15 18:39:22.937226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.723 [2024-12-15 18:39:22.937535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:22.723 [2024-12-15 18:39:22.937613] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.723 [2024-12-15 18:39:22.938011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:22.723 [2024-12-15 18:39:22.938267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:22.723 [2024-12-15 18:39:22.938344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:22.723 [2024-12-15 18:39:22.938549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.723 "name": "raid_bdev1", 00:08:22.723 "uuid": "e3b842a1-64b3-4b8d-8924-bd48948ea221", 00:08:22.723 "strip_size_kb": 64, 00:08:22.723 "state": "online", 00:08:22.723 "raid_level": "concat", 00:08:22.723 "superblock": true, 00:08:22.723 "num_base_bdevs": 3, 00:08:22.723 "num_base_bdevs_discovered": 3, 00:08:22.723 "num_base_bdevs_operational": 3, 00:08:22.723 "base_bdevs_list": [ 00:08:22.723 { 00:08:22.723 
"name": "BaseBdev1", 00:08:22.723 "uuid": "305f5025-3150-5af1-8bb6-424032d190ae", 00:08:22.723 "is_configured": true, 00:08:22.723 "data_offset": 2048, 00:08:22.723 "data_size": 63488 00:08:22.723 }, 00:08:22.723 { 00:08:22.723 "name": "BaseBdev2", 00:08:22.723 "uuid": "23854aea-c5da-5c0a-a607-9635c6bb022a", 00:08:22.723 "is_configured": true, 00:08:22.723 "data_offset": 2048, 00:08:22.723 "data_size": 63488 00:08:22.723 }, 00:08:22.723 { 00:08:22.723 "name": "BaseBdev3", 00:08:22.723 "uuid": "a4d2a8fe-0c34-5bef-a67c-6b182644ec19", 00:08:22.723 "is_configured": true, 00:08:22.723 "data_offset": 2048, 00:08:22.723 "data_size": 63488 00:08:22.723 } 00:08:22.723 ] 00:08:22.723 }' 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.723 18:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.983 18:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.983 18:39:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.242 [2024-12-15 18:39:23.446322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.181 "name": "raid_bdev1", 00:08:24.181 "uuid": "e3b842a1-64b3-4b8d-8924-bd48948ea221", 00:08:24.181 "strip_size_kb": 64, 00:08:24.181 "state": "online", 
00:08:24.181 "raid_level": "concat", 00:08:24.181 "superblock": true, 00:08:24.181 "num_base_bdevs": 3, 00:08:24.181 "num_base_bdevs_discovered": 3, 00:08:24.181 "num_base_bdevs_operational": 3, 00:08:24.181 "base_bdevs_list": [ 00:08:24.181 { 00:08:24.181 "name": "BaseBdev1", 00:08:24.181 "uuid": "305f5025-3150-5af1-8bb6-424032d190ae", 00:08:24.181 "is_configured": true, 00:08:24.181 "data_offset": 2048, 00:08:24.181 "data_size": 63488 00:08:24.181 }, 00:08:24.181 { 00:08:24.181 "name": "BaseBdev2", 00:08:24.181 "uuid": "23854aea-c5da-5c0a-a607-9635c6bb022a", 00:08:24.181 "is_configured": true, 00:08:24.181 "data_offset": 2048, 00:08:24.181 "data_size": 63488 00:08:24.181 }, 00:08:24.181 { 00:08:24.181 "name": "BaseBdev3", 00:08:24.181 "uuid": "a4d2a8fe-0c34-5bef-a67c-6b182644ec19", 00:08:24.181 "is_configured": true, 00:08:24.181 "data_offset": 2048, 00:08:24.181 "data_size": 63488 00:08:24.181 } 00:08:24.181 ] 00:08:24.181 }' 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.181 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.440 [2024-12-15 18:39:24.838358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.440 [2024-12-15 18:39:24.838480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.440 [2024-12-15 18:39:24.841134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.440 [2024-12-15 18:39:24.841230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.440 [2024-12-15 18:39:24.841299] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.440 [2024-12-15 18:39:24.841352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:24.440 { 00:08:24.440 "results": [ 00:08:24.440 { 00:08:24.440 "job": "raid_bdev1", 00:08:24.440 "core_mask": "0x1", 00:08:24.440 "workload": "randrw", 00:08:24.440 "percentage": 50, 00:08:24.440 "status": "finished", 00:08:24.440 "queue_depth": 1, 00:08:24.440 "io_size": 131072, 00:08:24.440 "runtime": 1.393097, 00:08:24.440 "iops": 15909.875622444093, 00:08:24.440 "mibps": 1988.7344528055116, 00:08:24.440 "io_failed": 1, 00:08:24.440 "io_timeout": 0, 00:08:24.440 "avg_latency_us": 86.99354129459779, 00:08:24.440 "min_latency_us": 26.047161572052403, 00:08:24.440 "max_latency_us": 1430.9170305676855 00:08:24.440 } 00:08:24.440 ], 00:08:24.440 "core_count": 1 00:08:24.440 } 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80265 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 80265 ']' 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 80265 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.440 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80265 00:08:24.700 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.700 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.700 killing process with pid 80265 00:08:24.700 
18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80265' 00:08:24.700 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 80265 00:08:24.700 [2024-12-15 18:39:24.882041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.700 18:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 80265 00:08:24.700 [2024-12-15 18:39:24.908017] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u2Qw6W7BM8 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:24.700 00:08:24.700 real 0m3.253s 00:08:24.700 user 0m4.128s 00:08:24.700 sys 0m0.517s 00:08:24.700 ************************************ 00:08:24.700 END TEST raid_write_error_test 00:08:24.700 ************************************ 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.700 18:39:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.959 18:39:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:24.960 18:39:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:24.960 18:39:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.960 18:39:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.960 18:39:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.960 ************************************ 00:08:24.960 START TEST raid_state_function_test 00:08:24.960 ************************************ 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80393 00:08:24.960 Process raid pid: 80393 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80393' 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80393 00:08:24.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80393 ']' 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.960 18:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.960 [2024-12-15 18:39:25.288847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:24.960 [2024-12-15 18:39:25.289069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.219 [2024-12-15 18:39:25.440604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.219 [2024-12-15 18:39:25.467705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.219 [2024-12-15 18:39:25.511568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.219 [2024-12-15 18:39:25.511706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.788 [2024-12-15 18:39:26.131469] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.788 [2024-12-15 18:39:26.131623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.788 [2024-12-15 18:39:26.131639] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.788 [2024-12-15 18:39:26.131649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.788 [2024-12-15 18:39:26.131656] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:25.788 [2024-12-15 18:39:26.131670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.788 18:39:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.788 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.788 "name": "Existed_Raid", 00:08:25.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.788 "strip_size_kb": 0, 00:08:25.788 "state": "configuring", 00:08:25.788 "raid_level": "raid1", 00:08:25.788 "superblock": false, 00:08:25.788 "num_base_bdevs": 3, 00:08:25.788 "num_base_bdevs_discovered": 0, 00:08:25.788 "num_base_bdevs_operational": 3, 00:08:25.788 "base_bdevs_list": [ 00:08:25.788 { 00:08:25.789 "name": "BaseBdev1", 00:08:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.789 "is_configured": false, 00:08:25.789 "data_offset": 0, 00:08:25.789 "data_size": 0 00:08:25.789 }, 00:08:25.789 { 00:08:25.789 "name": "BaseBdev2", 00:08:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.789 "is_configured": false, 00:08:25.789 "data_offset": 0, 00:08:25.789 "data_size": 0 00:08:25.789 }, 00:08:25.789 { 00:08:25.789 "name": "BaseBdev3", 00:08:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.789 "is_configured": false, 00:08:25.789 "data_offset": 0, 
00:08:25.789 "data_size": 0 00:08:25.789 } 00:08:25.789 ] 00:08:25.789 }' 00:08:25.789 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.789 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.358 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.358 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.358 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.358 [2024-12-15 18:39:26.506700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.359 [2024-12-15 18:39:26.506814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.359 [2024-12-15 18:39:26.518675] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.359 [2024-12-15 18:39:26.518759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.359 [2024-12-15 18:39:26.518787] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.359 [2024-12-15 18:39:26.518846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.359 [2024-12-15 18:39:26.518868] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:08:26.359 [2024-12-15 18:39:26.518897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.359 [2024-12-15 18:39:26.539756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.359 BaseBdev1 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.359 [ 00:08:26.359 { 00:08:26.359 "name": "BaseBdev1", 00:08:26.359 "aliases": [ 00:08:26.359 "d22af091-0035-48c2-8959-7505f31633dc" 00:08:26.359 ], 00:08:26.359 "product_name": "Malloc disk", 00:08:26.359 "block_size": 512, 00:08:26.359 "num_blocks": 65536, 00:08:26.359 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:26.359 "assigned_rate_limits": { 00:08:26.359 "rw_ios_per_sec": 0, 00:08:26.359 "rw_mbytes_per_sec": 0, 00:08:26.359 "r_mbytes_per_sec": 0, 00:08:26.359 "w_mbytes_per_sec": 0 00:08:26.359 }, 00:08:26.359 "claimed": true, 00:08:26.359 "claim_type": "exclusive_write", 00:08:26.359 "zoned": false, 00:08:26.359 "supported_io_types": { 00:08:26.359 "read": true, 00:08:26.359 "write": true, 00:08:26.359 "unmap": true, 00:08:26.359 "flush": true, 00:08:26.359 "reset": true, 00:08:26.359 "nvme_admin": false, 00:08:26.359 "nvme_io": false, 00:08:26.359 "nvme_io_md": false, 00:08:26.359 "write_zeroes": true, 00:08:26.359 "zcopy": true, 00:08:26.359 "get_zone_info": false, 00:08:26.359 "zone_management": false, 00:08:26.359 "zone_append": false, 00:08:26.359 "compare": false, 00:08:26.359 "compare_and_write": false, 00:08:26.359 "abort": true, 00:08:26.359 "seek_hole": false, 00:08:26.359 "seek_data": false, 00:08:26.359 "copy": true, 00:08:26.359 "nvme_iov_md": false 00:08:26.359 }, 00:08:26.359 "memory_domains": [ 00:08:26.359 { 00:08:26.359 "dma_device_id": "system", 00:08:26.359 "dma_device_type": 1 00:08:26.359 }, 00:08:26.359 { 00:08:26.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.359 "dma_device_type": 2 00:08:26.359 } 00:08:26.359 ], 00:08:26.359 "driver_specific": {} 00:08:26.359 } 
00:08:26.359 ] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.359 18:39:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.359 "name": "Existed_Raid", 00:08:26.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.359 "strip_size_kb": 0, 00:08:26.359 "state": "configuring", 00:08:26.359 "raid_level": "raid1", 00:08:26.359 "superblock": false, 00:08:26.359 "num_base_bdevs": 3, 00:08:26.359 "num_base_bdevs_discovered": 1, 00:08:26.359 "num_base_bdevs_operational": 3, 00:08:26.359 "base_bdevs_list": [ 00:08:26.359 { 00:08:26.359 "name": "BaseBdev1", 00:08:26.359 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:26.359 "is_configured": true, 00:08:26.359 "data_offset": 0, 00:08:26.359 "data_size": 65536 00:08:26.359 }, 00:08:26.359 { 00:08:26.359 "name": "BaseBdev2", 00:08:26.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.359 "is_configured": false, 00:08:26.359 "data_offset": 0, 00:08:26.359 "data_size": 0 00:08:26.359 }, 00:08:26.359 { 00:08:26.359 "name": "BaseBdev3", 00:08:26.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.359 "is_configured": false, 00:08:26.359 "data_offset": 0, 00:08:26.359 "data_size": 0 00:08:26.359 } 00:08:26.359 ] 00:08:26.359 }' 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.359 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.619 [2024-12-15 18:39:26.991083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.619 [2024-12-15 18:39:26.991148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 
00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.619 18:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.619 [2024-12-15 18:39:27.003122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.619 [2024-12-15 18:39:27.005303] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.619 [2024-12-15 18:39:27.005349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.619 [2024-12-15 18:39:27.005359] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.619 [2024-12-15 18:39:27.005386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.619 18:39:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.619 "name": "Existed_Raid", 00:08:26.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.619 "strip_size_kb": 0, 00:08:26.619 "state": "configuring", 00:08:26.619 "raid_level": "raid1", 00:08:26.619 "superblock": false, 00:08:26.619 "num_base_bdevs": 3, 00:08:26.619 "num_base_bdevs_discovered": 1, 00:08:26.619 "num_base_bdevs_operational": 3, 00:08:26.619 "base_bdevs_list": [ 00:08:26.619 { 00:08:26.619 "name": "BaseBdev1", 00:08:26.619 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:26.619 "is_configured": true, 00:08:26.619 "data_offset": 0, 00:08:26.619 "data_size": 65536 00:08:26.619 }, 00:08:26.619 { 00:08:26.619 "name": "BaseBdev2", 00:08:26.619 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.619 "is_configured": false, 00:08:26.619 "data_offset": 0, 00:08:26.619 "data_size": 0 00:08:26.619 }, 00:08:26.619 { 00:08:26.619 "name": "BaseBdev3", 00:08:26.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.619 "is_configured": false, 00:08:26.619 "data_offset": 0, 00:08:26.619 "data_size": 0 00:08:26.619 } 00:08:26.619 ] 00:08:26.619 }' 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.619 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.189 [2024-12-15 18:39:27.421632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.189 BaseBdev2 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.189 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.189 [ 00:08:27.189 { 00:08:27.189 "name": "BaseBdev2", 00:08:27.189 "aliases": [ 00:08:27.189 "7e148b95-c892-4ba1-b54e-524b4c5af25f" 00:08:27.189 ], 00:08:27.189 "product_name": "Malloc disk", 00:08:27.189 "block_size": 512, 00:08:27.189 "num_blocks": 65536, 00:08:27.189 "uuid": "7e148b95-c892-4ba1-b54e-524b4c5af25f", 00:08:27.189 "assigned_rate_limits": { 00:08:27.189 "rw_ios_per_sec": 0, 00:08:27.189 "rw_mbytes_per_sec": 0, 00:08:27.189 "r_mbytes_per_sec": 0, 00:08:27.189 "w_mbytes_per_sec": 0 00:08:27.189 }, 00:08:27.189 "claimed": true, 00:08:27.189 "claim_type": "exclusive_write", 00:08:27.189 "zoned": false, 00:08:27.189 "supported_io_types": { 00:08:27.189 "read": true, 00:08:27.189 "write": true, 00:08:27.189 "unmap": true, 00:08:27.189 "flush": true, 00:08:27.189 "reset": true, 00:08:27.189 "nvme_admin": false, 00:08:27.189 "nvme_io": false, 00:08:27.189 "nvme_io_md": false, 00:08:27.189 "write_zeroes": true, 00:08:27.189 "zcopy": true, 00:08:27.189 "get_zone_info": false, 00:08:27.190 "zone_management": false, 00:08:27.190 "zone_append": false, 00:08:27.190 "compare": false, 00:08:27.190 "compare_and_write": false, 00:08:27.190 "abort": true, 00:08:27.190 "seek_hole": false, 00:08:27.190 "seek_data": false, 00:08:27.190 "copy": true, 00:08:27.190 "nvme_iov_md": false 
00:08:27.190 }, 00:08:27.190 "memory_domains": [ 00:08:27.190 { 00:08:27.190 "dma_device_id": "system", 00:08:27.190 "dma_device_type": 1 00:08:27.190 }, 00:08:27.190 { 00:08:27.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.190 "dma_device_type": 2 00:08:27.190 } 00:08:27.190 ], 00:08:27.190 "driver_specific": {} 00:08:27.190 } 00:08:27.190 ] 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.190 "name": "Existed_Raid", 00:08:27.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.190 "strip_size_kb": 0, 00:08:27.190 "state": "configuring", 00:08:27.190 "raid_level": "raid1", 00:08:27.190 "superblock": false, 00:08:27.190 "num_base_bdevs": 3, 00:08:27.190 "num_base_bdevs_discovered": 2, 00:08:27.190 "num_base_bdevs_operational": 3, 00:08:27.190 "base_bdevs_list": [ 00:08:27.190 { 00:08:27.190 "name": "BaseBdev1", 00:08:27.190 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:27.190 "is_configured": true, 00:08:27.190 "data_offset": 0, 00:08:27.190 "data_size": 65536 00:08:27.190 }, 00:08:27.190 { 00:08:27.190 "name": "BaseBdev2", 00:08:27.190 "uuid": "7e148b95-c892-4ba1-b54e-524b4c5af25f", 00:08:27.190 "is_configured": true, 00:08:27.190 "data_offset": 0, 00:08:27.190 "data_size": 65536 00:08:27.190 }, 00:08:27.190 { 00:08:27.190 "name": "BaseBdev3", 00:08:27.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.190 "is_configured": false, 00:08:27.190 "data_offset": 0, 00:08:27.190 "data_size": 0 00:08:27.190 } 00:08:27.190 ] 00:08:27.190 }' 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.190 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.457 [2024-12-15 18:39:27.867690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:27.457 [2024-12-15 18:39:27.867757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:27.457 [2024-12-15 18:39:27.867775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:27.457 [2024-12-15 18:39:27.868200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:27.457 [2024-12-15 18:39:27.868432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:27.457 [2024-12-15 18:39:27.868464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:27.457 [2024-12-15 18:39:27.868721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.457 BaseBdev3 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.457 18:39:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.457 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 [ 00:08:27.736 { 00:08:27.736 "name": "BaseBdev3", 00:08:27.736 "aliases": [ 00:08:27.736 "53acb147-e7fa-4f9b-8509-29b27deaff06" 00:08:27.736 ], 00:08:27.736 "product_name": "Malloc disk", 00:08:27.736 "block_size": 512, 00:08:27.736 "num_blocks": 65536, 00:08:27.736 "uuid": "53acb147-e7fa-4f9b-8509-29b27deaff06", 00:08:27.736 "assigned_rate_limits": { 00:08:27.736 "rw_ios_per_sec": 0, 00:08:27.736 "rw_mbytes_per_sec": 0, 00:08:27.736 "r_mbytes_per_sec": 0, 00:08:27.736 "w_mbytes_per_sec": 0 00:08:27.736 }, 00:08:27.736 "claimed": true, 00:08:27.736 "claim_type": "exclusive_write", 00:08:27.736 "zoned": false, 00:08:27.736 "supported_io_types": { 00:08:27.736 "read": true, 00:08:27.736 "write": true, 00:08:27.736 "unmap": true, 00:08:27.736 "flush": true, 00:08:27.736 "reset": true, 00:08:27.736 "nvme_admin": false, 00:08:27.736 "nvme_io": false, 00:08:27.736 "nvme_io_md": false, 00:08:27.736 "write_zeroes": true, 00:08:27.736 "zcopy": true, 00:08:27.736 "get_zone_info": false, 00:08:27.736 "zone_management": false, 00:08:27.736 "zone_append": false, 00:08:27.736 "compare": false, 00:08:27.736 "compare_and_write": false, 00:08:27.736 "abort": true, 00:08:27.736 "seek_hole": false, 00:08:27.736 
"seek_data": false, 00:08:27.736 "copy": true, 00:08:27.736 "nvme_iov_md": false 00:08:27.736 }, 00:08:27.736 "memory_domains": [ 00:08:27.736 { 00:08:27.736 "dma_device_id": "system", 00:08:27.736 "dma_device_type": 1 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.736 "dma_device_type": 2 00:08:27.736 } 00:08:27.736 ], 00:08:27.736 "driver_specific": {} 00:08:27.736 } 00:08:27.736 ] 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.736 "name": "Existed_Raid", 00:08:27.736 "uuid": "4efee2a3-9970-4ca9-9cda-5be5cc99e24b", 00:08:27.736 "strip_size_kb": 0, 00:08:27.736 "state": "online", 00:08:27.736 "raid_level": "raid1", 00:08:27.736 "superblock": false, 00:08:27.736 "num_base_bdevs": 3, 00:08:27.736 "num_base_bdevs_discovered": 3, 00:08:27.736 "num_base_bdevs_operational": 3, 00:08:27.736 "base_bdevs_list": [ 00:08:27.736 { 00:08:27.736 "name": "BaseBdev1", 00:08:27.736 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:27.736 "is_configured": true, 00:08:27.736 "data_offset": 0, 00:08:27.736 "data_size": 65536 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "name": "BaseBdev2", 00:08:27.736 "uuid": "7e148b95-c892-4ba1-b54e-524b4c5af25f", 00:08:27.736 "is_configured": true, 00:08:27.736 "data_offset": 0, 00:08:27.736 "data_size": 65536 00:08:27.736 }, 00:08:27.736 { 00:08:27.736 "name": "BaseBdev3", 00:08:27.736 "uuid": "53acb147-e7fa-4f9b-8509-29b27deaff06", 00:08:27.736 "is_configured": true, 00:08:27.736 "data_offset": 0, 00:08:27.736 "data_size": 65536 00:08:27.736 } 00:08:27.736 ] 00:08:27.736 }' 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.736 18:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 
18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.996 [2024-12-15 18:39:28.343258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.996 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.996 "name": "Existed_Raid", 00:08:27.996 "aliases": [ 00:08:27.996 "4efee2a3-9970-4ca9-9cda-5be5cc99e24b" 00:08:27.996 ], 00:08:27.996 "product_name": "Raid Volume", 00:08:27.996 "block_size": 512, 00:08:27.996 "num_blocks": 65536, 00:08:27.996 "uuid": "4efee2a3-9970-4ca9-9cda-5be5cc99e24b", 00:08:27.996 "assigned_rate_limits": { 00:08:27.996 "rw_ios_per_sec": 0, 00:08:27.996 "rw_mbytes_per_sec": 0, 00:08:27.996 "r_mbytes_per_sec": 0, 00:08:27.996 "w_mbytes_per_sec": 0 00:08:27.996 }, 00:08:27.996 "claimed": false, 00:08:27.996 "zoned": false, 
00:08:27.996 "supported_io_types": { 00:08:27.996 "read": true, 00:08:27.996 "write": true, 00:08:27.996 "unmap": false, 00:08:27.996 "flush": false, 00:08:27.996 "reset": true, 00:08:27.996 "nvme_admin": false, 00:08:27.996 "nvme_io": false, 00:08:27.996 "nvme_io_md": false, 00:08:27.996 "write_zeroes": true, 00:08:27.996 "zcopy": false, 00:08:27.996 "get_zone_info": false, 00:08:27.996 "zone_management": false, 00:08:27.996 "zone_append": false, 00:08:27.996 "compare": false, 00:08:27.996 "compare_and_write": false, 00:08:27.996 "abort": false, 00:08:27.996 "seek_hole": false, 00:08:27.996 "seek_data": false, 00:08:27.996 "copy": false, 00:08:27.996 "nvme_iov_md": false 00:08:27.996 }, 00:08:27.996 "memory_domains": [ 00:08:27.996 { 00:08:27.996 "dma_device_id": "system", 00:08:27.996 "dma_device_type": 1 00:08:27.996 }, 00:08:27.996 { 00:08:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.996 "dma_device_type": 2 00:08:27.996 }, 00:08:27.996 { 00:08:27.996 "dma_device_id": "system", 00:08:27.996 "dma_device_type": 1 00:08:27.996 }, 00:08:27.996 { 00:08:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.996 "dma_device_type": 2 00:08:27.996 }, 00:08:27.996 { 00:08:27.996 "dma_device_id": "system", 00:08:27.996 "dma_device_type": 1 00:08:27.996 }, 00:08:27.996 { 00:08:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.996 "dma_device_type": 2 00:08:27.996 } 00:08:27.996 ], 00:08:27.996 "driver_specific": { 00:08:27.996 "raid": { 00:08:27.996 "uuid": "4efee2a3-9970-4ca9-9cda-5be5cc99e24b", 00:08:27.997 "strip_size_kb": 0, 00:08:27.997 "state": "online", 00:08:27.997 "raid_level": "raid1", 00:08:27.997 "superblock": false, 00:08:27.997 "num_base_bdevs": 3, 00:08:27.997 "num_base_bdevs_discovered": 3, 00:08:27.997 "num_base_bdevs_operational": 3, 00:08:27.997 "base_bdevs_list": [ 00:08:27.997 { 00:08:27.997 "name": "BaseBdev1", 00:08:27.997 "uuid": "d22af091-0035-48c2-8959-7505f31633dc", 00:08:27.997 "is_configured": true, 00:08:27.997 
"data_offset": 0, 00:08:27.997 "data_size": 65536 00:08:27.997 }, 00:08:27.997 { 00:08:27.997 "name": "BaseBdev2", 00:08:27.997 "uuid": "7e148b95-c892-4ba1-b54e-524b4c5af25f", 00:08:27.997 "is_configured": true, 00:08:27.997 "data_offset": 0, 00:08:27.997 "data_size": 65536 00:08:27.997 }, 00:08:27.997 { 00:08:27.997 "name": "BaseBdev3", 00:08:27.997 "uuid": "53acb147-e7fa-4f9b-8509-29b27deaff06", 00:08:27.997 "is_configured": true, 00:08:27.997 "data_offset": 0, 00:08:27.997 "data_size": 65536 00:08:27.997 } 00:08:27.997 ] 00:08:27.997 } 00:08:27.997 } 00:08:27.997 }' 00:08:27.997 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.997 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:27.997 BaseBdev2 00:08:27.997 BaseBdev3' 00:08:27.997 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 [2024-12-15 18:39:28.626520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.257 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.257 "name": "Existed_Raid", 00:08:28.257 "uuid": "4efee2a3-9970-4ca9-9cda-5be5cc99e24b", 00:08:28.257 "strip_size_kb": 0, 00:08:28.257 "state": "online", 00:08:28.257 "raid_level": "raid1", 00:08:28.257 "superblock": false, 00:08:28.257 "num_base_bdevs": 3, 00:08:28.257 "num_base_bdevs_discovered": 2, 00:08:28.257 "num_base_bdevs_operational": 2, 00:08:28.257 "base_bdevs_list": [ 00:08:28.257 { 00:08:28.257 "name": null, 00:08:28.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.257 "is_configured": false, 00:08:28.257 "data_offset": 0, 00:08:28.257 "data_size": 65536 00:08:28.257 }, 00:08:28.257 { 00:08:28.258 "name": "BaseBdev2", 00:08:28.258 "uuid": "7e148b95-c892-4ba1-b54e-524b4c5af25f", 00:08:28.258 "is_configured": true, 00:08:28.258 "data_offset": 0, 00:08:28.258 "data_size": 65536 00:08:28.258 }, 00:08:28.258 { 00:08:28.258 "name": "BaseBdev3", 00:08:28.258 "uuid": "53acb147-e7fa-4f9b-8509-29b27deaff06", 00:08:28.258 "is_configured": true, 00:08:28.258 "data_offset": 0, 00:08:28.258 "data_size": 65536 00:08:28.258 } 00:08:28.258 ] 
00:08:28.258 }' 00:08:28.258 18:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.258 18:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.826 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 [2024-12-15 18:39:29.149473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.827 18:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 [2024-12-15 18:39:29.216784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.827 [2024-12-15 18:39:29.216897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.827 [2024-12-15 18:39:29.228765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.827 [2024-12-15 18:39:29.228830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.827 [2024-12-15 18:39:29.228845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.827 18:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:28.827 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 BaseBdev2 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.087 
18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 [ 00:08:29.087 { 00:08:29.087 "name": "BaseBdev2", 00:08:29.087 "aliases": [ 00:08:29.087 "0a61dbce-f3cf-4577-aef8-227730c5c812" 00:08:29.087 ], 00:08:29.087 "product_name": "Malloc disk", 00:08:29.087 "block_size": 512, 00:08:29.087 "num_blocks": 65536, 00:08:29.087 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:29.087 "assigned_rate_limits": { 00:08:29.087 "rw_ios_per_sec": 0, 00:08:29.087 "rw_mbytes_per_sec": 0, 00:08:29.087 "r_mbytes_per_sec": 0, 00:08:29.087 "w_mbytes_per_sec": 0 00:08:29.087 }, 00:08:29.087 "claimed": false, 00:08:29.087 "zoned": false, 00:08:29.087 "supported_io_types": { 00:08:29.087 "read": true, 00:08:29.087 "write": true, 00:08:29.087 "unmap": true, 00:08:29.087 "flush": true, 00:08:29.087 "reset": true, 00:08:29.087 "nvme_admin": false, 00:08:29.087 "nvme_io": false, 00:08:29.087 "nvme_io_md": false, 00:08:29.087 "write_zeroes": true, 
00:08:29.087 "zcopy": true, 00:08:29.087 "get_zone_info": false, 00:08:29.087 "zone_management": false, 00:08:29.087 "zone_append": false, 00:08:29.087 "compare": false, 00:08:29.087 "compare_and_write": false, 00:08:29.087 "abort": true, 00:08:29.087 "seek_hole": false, 00:08:29.087 "seek_data": false, 00:08:29.087 "copy": true, 00:08:29.087 "nvme_iov_md": false 00:08:29.087 }, 00:08:29.087 "memory_domains": [ 00:08:29.087 { 00:08:29.087 "dma_device_id": "system", 00:08:29.087 "dma_device_type": 1 00:08:29.087 }, 00:08:29.087 { 00:08:29.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.087 "dma_device_type": 2 00:08:29.087 } 00:08:29.087 ], 00:08:29.087 "driver_specific": {} 00:08:29.087 } 00:08:29.087 ] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.087 BaseBdev3 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:29.087 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.088 18:39:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 [ 00:08:29.088 { 00:08:29.088 "name": "BaseBdev3", 00:08:29.088 "aliases": [ 00:08:29.088 "9038be3c-2fb5-4921-bf8e-1ef7d9583255" 00:08:29.088 ], 00:08:29.088 "product_name": "Malloc disk", 00:08:29.088 "block_size": 512, 00:08:29.088 "num_blocks": 65536, 00:08:29.088 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:29.088 "assigned_rate_limits": { 00:08:29.088 "rw_ios_per_sec": 0, 00:08:29.088 "rw_mbytes_per_sec": 0, 00:08:29.088 "r_mbytes_per_sec": 0, 00:08:29.088 "w_mbytes_per_sec": 0 00:08:29.088 }, 00:08:29.088 "claimed": false, 00:08:29.088 "zoned": false, 00:08:29.088 "supported_io_types": { 00:08:29.088 "read": true, 00:08:29.088 "write": true, 00:08:29.088 "unmap": true, 00:08:29.088 "flush": true, 00:08:29.088 "reset": true, 00:08:29.088 "nvme_admin": false, 00:08:29.088 "nvme_io": false, 00:08:29.088 "nvme_io_md": false, 00:08:29.088 "write_zeroes": true, 
00:08:29.088 "zcopy": true, 00:08:29.088 "get_zone_info": false, 00:08:29.088 "zone_management": false, 00:08:29.088 "zone_append": false, 00:08:29.088 "compare": false, 00:08:29.088 "compare_and_write": false, 00:08:29.088 "abort": true, 00:08:29.088 "seek_hole": false, 00:08:29.088 "seek_data": false, 00:08:29.088 "copy": true, 00:08:29.088 "nvme_iov_md": false 00:08:29.088 }, 00:08:29.088 "memory_domains": [ 00:08:29.088 { 00:08:29.088 "dma_device_id": "system", 00:08:29.088 "dma_device_type": 1 00:08:29.088 }, 00:08:29.088 { 00:08:29.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.088 "dma_device_type": 2 00:08:29.088 } 00:08:29.088 ], 00:08:29.088 "driver_specific": {} 00:08:29.088 } 00:08:29.088 ] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 [2024-12-15 18:39:29.378129] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.088 [2024-12-15 18:39:29.378181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.088 [2024-12-15 18:39:29.378200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.088 [2024-12-15 18:39:29.380228] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:29.088 "name": "Existed_Raid", 00:08:29.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.088 "strip_size_kb": 0, 00:08:29.088 "state": "configuring", 00:08:29.088 "raid_level": "raid1", 00:08:29.088 "superblock": false, 00:08:29.088 "num_base_bdevs": 3, 00:08:29.088 "num_base_bdevs_discovered": 2, 00:08:29.088 "num_base_bdevs_operational": 3, 00:08:29.088 "base_bdevs_list": [ 00:08:29.088 { 00:08:29.088 "name": "BaseBdev1", 00:08:29.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.088 "is_configured": false, 00:08:29.088 "data_offset": 0, 00:08:29.088 "data_size": 0 00:08:29.088 }, 00:08:29.088 { 00:08:29.088 "name": "BaseBdev2", 00:08:29.088 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:29.088 "is_configured": true, 00:08:29.088 "data_offset": 0, 00:08:29.088 "data_size": 65536 00:08:29.088 }, 00:08:29.088 { 00:08:29.088 "name": "BaseBdev3", 00:08:29.088 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:29.088 "is_configured": true, 00:08:29.088 "data_offset": 0, 00:08:29.088 "data_size": 65536 00:08:29.088 } 00:08:29.088 ] 00:08:29.088 }' 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.088 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.658 [2024-12-15 18:39:29.825448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.658 "name": "Existed_Raid", 00:08:29.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.658 "strip_size_kb": 0, 00:08:29.658 "state": "configuring", 00:08:29.658 "raid_level": "raid1", 00:08:29.658 "superblock": false, 00:08:29.658 "num_base_bdevs": 3, 
00:08:29.658 "num_base_bdevs_discovered": 1, 00:08:29.658 "num_base_bdevs_operational": 3, 00:08:29.658 "base_bdevs_list": [ 00:08:29.658 { 00:08:29.658 "name": "BaseBdev1", 00:08:29.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.658 "is_configured": false, 00:08:29.658 "data_offset": 0, 00:08:29.658 "data_size": 0 00:08:29.658 }, 00:08:29.658 { 00:08:29.658 "name": null, 00:08:29.658 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:29.658 "is_configured": false, 00:08:29.658 "data_offset": 0, 00:08:29.658 "data_size": 65536 00:08:29.658 }, 00:08:29.658 { 00:08:29.658 "name": "BaseBdev3", 00:08:29.658 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:29.658 "is_configured": true, 00:08:29.658 "data_offset": 0, 00:08:29.658 "data_size": 65536 00:08:29.658 } 00:08:29.658 ] 00:08:29.658 }' 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.658 18:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.918 18:39:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 [2024-12-15 18:39:30.267954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.918 BaseBdev1 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.918 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.918 [ 00:08:29.918 { 00:08:29.918 "name": "BaseBdev1", 00:08:29.918 "aliases": [ 00:08:29.918 "880b4ccf-b1c6-40f5-9190-3f109f2745c8" 00:08:29.918 ], 00:08:29.918 "product_name": "Malloc disk", 
00:08:29.918 "block_size": 512, 00:08:29.918 "num_blocks": 65536, 00:08:29.918 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:29.918 "assigned_rate_limits": { 00:08:29.918 "rw_ios_per_sec": 0, 00:08:29.918 "rw_mbytes_per_sec": 0, 00:08:29.918 "r_mbytes_per_sec": 0, 00:08:29.918 "w_mbytes_per_sec": 0 00:08:29.918 }, 00:08:29.918 "claimed": true, 00:08:29.918 "claim_type": "exclusive_write", 00:08:29.918 "zoned": false, 00:08:29.918 "supported_io_types": { 00:08:29.918 "read": true, 00:08:29.918 "write": true, 00:08:29.918 "unmap": true, 00:08:29.918 "flush": true, 00:08:29.918 "reset": true, 00:08:29.918 "nvme_admin": false, 00:08:29.918 "nvme_io": false, 00:08:29.918 "nvme_io_md": false, 00:08:29.918 "write_zeroes": true, 00:08:29.918 "zcopy": true, 00:08:29.918 "get_zone_info": false, 00:08:29.918 "zone_management": false, 00:08:29.918 "zone_append": false, 00:08:29.918 "compare": false, 00:08:29.918 "compare_and_write": false, 00:08:29.918 "abort": true, 00:08:29.918 "seek_hole": false, 00:08:29.918 "seek_data": false, 00:08:29.918 "copy": true, 00:08:29.918 "nvme_iov_md": false 00:08:29.918 }, 00:08:29.918 "memory_domains": [ 00:08:29.918 { 00:08:29.918 "dma_device_id": "system", 00:08:29.918 "dma_device_type": 1 00:08:29.918 }, 00:08:29.918 { 00:08:29.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.918 "dma_device_type": 2 00:08:29.919 } 00:08:29.919 ], 00:08:29.919 "driver_specific": {} 00:08:29.919 } 00:08:29.919 ] 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.919 "name": "Existed_Raid", 00:08:29.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.919 "strip_size_kb": 0, 00:08:29.919 "state": "configuring", 00:08:29.919 "raid_level": "raid1", 00:08:29.919 "superblock": false, 00:08:29.919 "num_base_bdevs": 3, 00:08:29.919 "num_base_bdevs_discovered": 2, 00:08:29.919 "num_base_bdevs_operational": 3, 00:08:29.919 "base_bdevs_list": [ 00:08:29.919 { 00:08:29.919 "name": "BaseBdev1", 00:08:29.919 "uuid": 
"880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:29.919 "is_configured": true, 00:08:29.919 "data_offset": 0, 00:08:29.919 "data_size": 65536 00:08:29.919 }, 00:08:29.919 { 00:08:29.919 "name": null, 00:08:29.919 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:29.919 "is_configured": false, 00:08:29.919 "data_offset": 0, 00:08:29.919 "data_size": 65536 00:08:29.919 }, 00:08:29.919 { 00:08:29.919 "name": "BaseBdev3", 00:08:29.919 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:29.919 "is_configured": true, 00:08:29.919 "data_offset": 0, 00:08:29.919 "data_size": 65536 00:08:29.919 } 00:08:29.919 ] 00:08:29.919 }' 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.919 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.488 [2024-12-15 18:39:30.787215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:30.488 18:39:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.488 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.488 "name": "Existed_Raid", 00:08:30.488 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:30.488 "strip_size_kb": 0, 00:08:30.488 "state": "configuring", 00:08:30.488 "raid_level": "raid1", 00:08:30.488 "superblock": false, 00:08:30.488 "num_base_bdevs": 3, 00:08:30.488 "num_base_bdevs_discovered": 1, 00:08:30.488 "num_base_bdevs_operational": 3, 00:08:30.488 "base_bdevs_list": [ 00:08:30.488 { 00:08:30.489 "name": "BaseBdev1", 00:08:30.489 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:30.489 "is_configured": true, 00:08:30.489 "data_offset": 0, 00:08:30.489 "data_size": 65536 00:08:30.489 }, 00:08:30.489 { 00:08:30.489 "name": null, 00:08:30.489 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:30.489 "is_configured": false, 00:08:30.489 "data_offset": 0, 00:08:30.489 "data_size": 65536 00:08:30.489 }, 00:08:30.489 { 00:08:30.489 "name": null, 00:08:30.489 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:30.489 "is_configured": false, 00:08:30.489 "data_offset": 0, 00:08:30.489 "data_size": 65536 00:08:30.489 } 00:08:30.489 ] 00:08:30.489 }' 00:08:30.489 18:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.489 18:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.748 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.748 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:30.748 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.748 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.009 [2024-12-15 18:39:31.226475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.009 "name": "Existed_Raid", 00:08:31.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.009 "strip_size_kb": 0, 00:08:31.009 "state": "configuring", 00:08:31.009 "raid_level": "raid1", 00:08:31.009 "superblock": false, 00:08:31.009 "num_base_bdevs": 3, 00:08:31.009 "num_base_bdevs_discovered": 2, 00:08:31.009 "num_base_bdevs_operational": 3, 00:08:31.009 "base_bdevs_list": [ 00:08:31.009 { 00:08:31.009 "name": "BaseBdev1", 00:08:31.009 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:31.009 "is_configured": true, 00:08:31.009 "data_offset": 0, 00:08:31.009 "data_size": 65536 00:08:31.009 }, 00:08:31.009 { 00:08:31.009 "name": null, 00:08:31.009 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:31.009 "is_configured": false, 00:08:31.009 "data_offset": 0, 00:08:31.009 "data_size": 65536 00:08:31.009 }, 00:08:31.009 { 00:08:31.009 "name": "BaseBdev3", 00:08:31.009 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:31.009 "is_configured": true, 00:08:31.009 "data_offset": 0, 00:08:31.009 "data_size": 65536 00:08:31.009 } 00:08:31.009 ] 00:08:31.009 }' 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.009 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.269 18:39:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.269 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.529 [2024-12-15 18:39:31.713747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.529 "name": "Existed_Raid", 00:08:31.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.529 "strip_size_kb": 0, 00:08:31.529 "state": "configuring", 00:08:31.529 "raid_level": "raid1", 00:08:31.529 "superblock": false, 00:08:31.529 "num_base_bdevs": 3, 00:08:31.529 "num_base_bdevs_discovered": 1, 00:08:31.529 "num_base_bdevs_operational": 3, 00:08:31.529 "base_bdevs_list": [ 00:08:31.529 { 00:08:31.529 "name": null, 00:08:31.529 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:31.529 "is_configured": false, 00:08:31.529 "data_offset": 0, 00:08:31.529 "data_size": 65536 00:08:31.529 }, 00:08:31.529 { 00:08:31.529 "name": null, 00:08:31.529 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:31.529 "is_configured": false, 00:08:31.529 "data_offset": 0, 00:08:31.529 "data_size": 65536 00:08:31.529 }, 00:08:31.529 { 00:08:31.529 "name": "BaseBdev3", 00:08:31.529 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:31.529 "is_configured": true, 00:08:31.529 "data_offset": 0, 00:08:31.529 "data_size": 65536 00:08:31.529 } 00:08:31.529 ] 00:08:31.529 }' 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.529 18:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.789 [2024-12-15 18:39:32.191980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.789 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.049 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.049 "name": "Existed_Raid", 00:08:32.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.049 "strip_size_kb": 0, 00:08:32.049 "state": "configuring", 00:08:32.049 "raid_level": "raid1", 00:08:32.049 "superblock": false, 00:08:32.049 "num_base_bdevs": 3, 00:08:32.049 "num_base_bdevs_discovered": 2, 00:08:32.049 "num_base_bdevs_operational": 3, 00:08:32.049 "base_bdevs_list": [ 00:08:32.049 { 00:08:32.049 "name": null, 00:08:32.049 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:32.049 "is_configured": false, 00:08:32.049 "data_offset": 0, 00:08:32.049 "data_size": 65536 00:08:32.049 }, 00:08:32.049 { 00:08:32.049 "name": "BaseBdev2", 00:08:32.049 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:32.049 "is_configured": true, 00:08:32.049 "data_offset": 0, 00:08:32.049 "data_size": 65536 00:08:32.050 }, 00:08:32.050 { 00:08:32.050 "name": "BaseBdev3", 
00:08:32.050 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:32.050 "is_configured": true, 00:08:32.050 "data_offset": 0, 00:08:32.050 "data_size": 65536 00:08:32.050 } 00:08:32.050 ] 00:08:32.050 }' 00:08:32.050 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.050 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.309 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.310 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:32.310 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.310 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 880b4ccf-b1c6-40f5-9190-3f109f2745c8 00:08:32.310 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.310 18:39:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.569 [2024-12-15 18:39:32.754410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:32.569 [2024-12-15 18:39:32.754470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:32.569 [2024-12-15 18:39:32.754479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:32.569 [2024-12-15 18:39:32.754763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:32.569 [2024-12-15 18:39:32.754920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:32.569 [2024-12-15 18:39:32.754946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:32.569 [2024-12-15 18:39:32.755153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.569 NewBaseBdev 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.569 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 
18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 [ 00:08:32.570 { 00:08:32.570 "name": "NewBaseBdev", 00:08:32.570 "aliases": [ 00:08:32.570 "880b4ccf-b1c6-40f5-9190-3f109f2745c8" 00:08:32.570 ], 00:08:32.570 "product_name": "Malloc disk", 00:08:32.570 "block_size": 512, 00:08:32.570 "num_blocks": 65536, 00:08:32.570 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:32.570 "assigned_rate_limits": { 00:08:32.570 "rw_ios_per_sec": 0, 00:08:32.570 "rw_mbytes_per_sec": 0, 00:08:32.570 "r_mbytes_per_sec": 0, 00:08:32.570 "w_mbytes_per_sec": 0 00:08:32.570 }, 00:08:32.570 "claimed": true, 00:08:32.570 "claim_type": "exclusive_write", 00:08:32.570 "zoned": false, 00:08:32.570 "supported_io_types": { 00:08:32.570 "read": true, 00:08:32.570 "write": true, 00:08:32.570 "unmap": true, 00:08:32.570 "flush": true, 00:08:32.570 "reset": true, 00:08:32.570 "nvme_admin": false, 00:08:32.570 "nvme_io": false, 00:08:32.570 "nvme_io_md": false, 00:08:32.570 "write_zeroes": true, 00:08:32.570 "zcopy": true, 00:08:32.570 "get_zone_info": false, 00:08:32.570 "zone_management": false, 00:08:32.570 "zone_append": false, 00:08:32.570 "compare": false, 00:08:32.570 "compare_and_write": false, 00:08:32.570 "abort": true, 00:08:32.570 "seek_hole": false, 00:08:32.570 "seek_data": false, 00:08:32.570 "copy": true, 00:08:32.570 "nvme_iov_md": false 00:08:32.570 }, 00:08:32.570 "memory_domains": [ 00:08:32.570 { 00:08:32.570 "dma_device_id": "system", 00:08:32.570 "dma_device_type": 1 
00:08:32.570 }, 00:08:32.570 { 00:08:32.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.570 "dma_device_type": 2 00:08:32.570 } 00:08:32.570 ], 00:08:32.570 "driver_specific": {} 00:08:32.570 } 00:08:32.570 ] 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.570 "name": "Existed_Raid", 00:08:32.570 "uuid": "0bb92ec0-592a-4f66-b829-3e9187a4fb9e", 00:08:32.570 "strip_size_kb": 0, 00:08:32.570 "state": "online", 00:08:32.570 "raid_level": "raid1", 00:08:32.570 "superblock": false, 00:08:32.570 "num_base_bdevs": 3, 00:08:32.570 "num_base_bdevs_discovered": 3, 00:08:32.570 "num_base_bdevs_operational": 3, 00:08:32.570 "base_bdevs_list": [ 00:08:32.570 { 00:08:32.570 "name": "NewBaseBdev", 00:08:32.570 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:32.570 "is_configured": true, 00:08:32.570 "data_offset": 0, 00:08:32.570 "data_size": 65536 00:08:32.570 }, 00:08:32.570 { 00:08:32.570 "name": "BaseBdev2", 00:08:32.570 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:32.570 "is_configured": true, 00:08:32.570 "data_offset": 0, 00:08:32.570 "data_size": 65536 00:08:32.570 }, 00:08:32.570 { 00:08:32.570 "name": "BaseBdev3", 00:08:32.570 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:32.570 "is_configured": true, 00:08:32.570 "data_offset": 0, 00:08:32.570 "data_size": 65536 00:08:32.570 } 00:08:32.570 ] 00:08:32.570 }' 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.570 18:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.830 [2024-12-15 18:39:33.234035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.830 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.830 "name": "Existed_Raid", 00:08:32.830 "aliases": [ 00:08:32.830 "0bb92ec0-592a-4f66-b829-3e9187a4fb9e" 00:08:32.830 ], 00:08:32.830 "product_name": "Raid Volume", 00:08:32.830 "block_size": 512, 00:08:32.830 "num_blocks": 65536, 00:08:32.830 "uuid": "0bb92ec0-592a-4f66-b829-3e9187a4fb9e", 00:08:32.830 "assigned_rate_limits": { 00:08:32.830 "rw_ios_per_sec": 0, 00:08:32.830 "rw_mbytes_per_sec": 0, 00:08:32.830 "r_mbytes_per_sec": 0, 00:08:32.830 "w_mbytes_per_sec": 0 00:08:32.830 }, 00:08:32.830 "claimed": false, 00:08:32.830 "zoned": false, 00:08:32.830 "supported_io_types": { 00:08:32.830 "read": true, 00:08:32.830 "write": true, 00:08:32.830 "unmap": false, 00:08:32.830 "flush": false, 00:08:32.830 "reset": true, 00:08:32.830 "nvme_admin": false, 00:08:32.830 "nvme_io": false, 00:08:32.830 "nvme_io_md": false, 00:08:32.830 "write_zeroes": true, 00:08:32.830 "zcopy": false, 00:08:32.830 "get_zone_info": false, 00:08:32.830 "zone_management": false, 00:08:32.830 
"zone_append": false, 00:08:32.830 "compare": false, 00:08:32.830 "compare_and_write": false, 00:08:32.830 "abort": false, 00:08:32.830 "seek_hole": false, 00:08:32.830 "seek_data": false, 00:08:32.830 "copy": false, 00:08:32.830 "nvme_iov_md": false 00:08:32.830 }, 00:08:32.830 "memory_domains": [ 00:08:32.830 { 00:08:32.830 "dma_device_id": "system", 00:08:32.830 "dma_device_type": 1 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.830 "dma_device_type": 2 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "dma_device_id": "system", 00:08:32.830 "dma_device_type": 1 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.830 "dma_device_type": 2 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "dma_device_id": "system", 00:08:32.830 "dma_device_type": 1 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.830 "dma_device_type": 2 00:08:32.830 } 00:08:32.830 ], 00:08:32.830 "driver_specific": { 00:08:32.830 "raid": { 00:08:32.830 "uuid": "0bb92ec0-592a-4f66-b829-3e9187a4fb9e", 00:08:32.830 "strip_size_kb": 0, 00:08:32.830 "state": "online", 00:08:32.830 "raid_level": "raid1", 00:08:32.830 "superblock": false, 00:08:32.830 "num_base_bdevs": 3, 00:08:32.830 "num_base_bdevs_discovered": 3, 00:08:32.830 "num_base_bdevs_operational": 3, 00:08:32.830 "base_bdevs_list": [ 00:08:32.830 { 00:08:32.830 "name": "NewBaseBdev", 00:08:32.830 "uuid": "880b4ccf-b1c6-40f5-9190-3f109f2745c8", 00:08:32.830 "is_configured": true, 00:08:32.830 "data_offset": 0, 00:08:32.830 "data_size": 65536 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "name": "BaseBdev2", 00:08:32.830 "uuid": "0a61dbce-f3cf-4577-aef8-227730c5c812", 00:08:32.830 "is_configured": true, 00:08:32.830 "data_offset": 0, 00:08:32.830 "data_size": 65536 00:08:32.830 }, 00:08:32.830 { 00:08:32.830 "name": "BaseBdev3", 00:08:32.830 "uuid": "9038be3c-2fb5-4921-bf8e-1ef7d9583255", 00:08:32.830 "is_configured": true, 
00:08:32.830 "data_offset": 0, 00:08:32.830 "data_size": 65536 00:08:32.830 } 00:08:32.830 ] 00:08:32.830 } 00:08:32.830 } 00:08:32.830 }' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:33.090 BaseBdev2 00:08:33.090 BaseBdev3' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.090 [2024-12-15 18:39:33.501183] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:33.090 [2024-12-15 18:39:33.501220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.090 [2024-12-15 18:39:33.501296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.090 [2024-12-15 18:39:33.501573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.090 [2024-12-15 18:39:33.501592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80393 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80393 ']' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80393 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.090 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80393 00:08:33.350 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.350 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.350 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80393' 00:08:33.350 killing process with pid 80393 00:08:33.350 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80393 00:08:33.350 [2024-12-15 18:39:33.539219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:33.350 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80393 00:08:33.350 [2024-12-15 18:39:33.571575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:33.609 00:08:33.609 real 0m8.599s 00:08:33.609 user 0m14.638s 00:08:33.609 sys 0m1.695s 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.609 ************************************ 00:08:33.609 END TEST raid_state_function_test 00:08:33.609 ************************************ 00:08:33.609 18:39:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:33.609 18:39:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.609 18:39:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.609 18:39:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.609 ************************************ 00:08:33.609 START TEST raid_state_function_test_sb 00:08:33.609 ************************************ 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:33.609 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80998 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.610 Process raid pid: 80998 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80998' 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80998 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80998 ']' 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.610 18:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.610 [2024-12-15 18:39:33.951889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:33.610 [2024-12-15 18:39:33.952038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.869 [2024-12-15 18:39:34.124944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.869 [2024-12-15 18:39:34.152246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.869 [2024-12-15 18:39:34.196077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.869 [2024-12-15 18:39:34.196123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 [2024-12-15 18:39:34.803891] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.438 [2024-12-15 18:39:34.803949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.438 [2024-12-15 18:39:34.803967] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.438 [2024-12-15 18:39:34.803979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.438 [2024-12-15 18:39:34.803986] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:34.438 [2024-12-15 18:39:34.803997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.438 "name": "Existed_Raid", 00:08:34.438 "uuid": "e21de58c-9210-4290-9015-f1541cef5ba6", 00:08:34.438 "strip_size_kb": 0, 00:08:34.438 "state": "configuring", 00:08:34.438 "raid_level": "raid1", 00:08:34.438 "superblock": true, 00:08:34.438 "num_base_bdevs": 3, 00:08:34.438 "num_base_bdevs_discovered": 0, 00:08:34.438 "num_base_bdevs_operational": 3, 00:08:34.438 "base_bdevs_list": [ 00:08:34.438 { 00:08:34.438 "name": "BaseBdev1", 00:08:34.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.438 "is_configured": false, 00:08:34.438 "data_offset": 0, 00:08:34.438 "data_size": 0 00:08:34.438 }, 00:08:34.438 { 00:08:34.438 "name": "BaseBdev2", 00:08:34.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.438 "is_configured": false, 00:08:34.438 "data_offset": 0, 00:08:34.438 "data_size": 0 00:08:34.438 }, 00:08:34.438 { 00:08:34.438 "name": "BaseBdev3", 00:08:34.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.438 "is_configured": false, 00:08:34.438 "data_offset": 0, 00:08:34.438 "data_size": 0 00:08:34.438 } 00:08:34.438 ] 00:08:34.438 }' 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.438 18:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 [2024-12-15 18:39:35.239003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.015 [2024-12-15 18:39:35.239046] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 [2024-12-15 18:39:35.250974] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.015 [2024-12-15 18:39:35.251016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.015 [2024-12-15 18:39:35.251025] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.015 [2024-12-15 18:39:35.251034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.015 [2024-12-15 18:39:35.251040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.015 [2024-12-15 18:39:35.251048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 [2024-12-15 18:39:35.272167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.015 BaseBdev1 
00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.015 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.015 [ 00:08:35.015 { 00:08:35.015 "name": "BaseBdev1", 00:08:35.015 "aliases": [ 00:08:35.015 "509b336c-0f9c-44c0-8425-16739ee31851" 00:08:35.015 ], 00:08:35.015 "product_name": "Malloc disk", 00:08:35.016 "block_size": 512, 00:08:35.016 "num_blocks": 65536, 00:08:35.016 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:35.016 "assigned_rate_limits": { 00:08:35.016 
"rw_ios_per_sec": 0, 00:08:35.016 "rw_mbytes_per_sec": 0, 00:08:35.016 "r_mbytes_per_sec": 0, 00:08:35.016 "w_mbytes_per_sec": 0 00:08:35.016 }, 00:08:35.016 "claimed": true, 00:08:35.016 "claim_type": "exclusive_write", 00:08:35.016 "zoned": false, 00:08:35.016 "supported_io_types": { 00:08:35.016 "read": true, 00:08:35.016 "write": true, 00:08:35.016 "unmap": true, 00:08:35.016 "flush": true, 00:08:35.016 "reset": true, 00:08:35.016 "nvme_admin": false, 00:08:35.016 "nvme_io": false, 00:08:35.016 "nvme_io_md": false, 00:08:35.016 "write_zeroes": true, 00:08:35.016 "zcopy": true, 00:08:35.016 "get_zone_info": false, 00:08:35.016 "zone_management": false, 00:08:35.016 "zone_append": false, 00:08:35.016 "compare": false, 00:08:35.016 "compare_and_write": false, 00:08:35.016 "abort": true, 00:08:35.016 "seek_hole": false, 00:08:35.016 "seek_data": false, 00:08:35.016 "copy": true, 00:08:35.016 "nvme_iov_md": false 00:08:35.016 }, 00:08:35.016 "memory_domains": [ 00:08:35.016 { 00:08:35.016 "dma_device_id": "system", 00:08:35.016 "dma_device_type": 1 00:08:35.016 }, 00:08:35.016 { 00:08:35.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.016 "dma_device_type": 2 00:08:35.016 } 00:08:35.016 ], 00:08:35.016 "driver_specific": {} 00:08:35.016 } 00:08:35.016 ] 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.016 "name": "Existed_Raid", 00:08:35.016 "uuid": "be6aafa7-2238-4b06-bada-64e6cd4d8e69", 00:08:35.016 "strip_size_kb": 0, 00:08:35.016 "state": "configuring", 00:08:35.016 "raid_level": "raid1", 00:08:35.016 "superblock": true, 00:08:35.016 "num_base_bdevs": 3, 00:08:35.016 "num_base_bdevs_discovered": 1, 00:08:35.016 "num_base_bdevs_operational": 3, 00:08:35.016 "base_bdevs_list": [ 00:08:35.016 { 00:08:35.016 "name": "BaseBdev1", 00:08:35.016 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:35.016 "is_configured": true, 00:08:35.016 "data_offset": 2048, 00:08:35.016 "data_size": 63488 
00:08:35.016 }, 00:08:35.016 { 00:08:35.016 "name": "BaseBdev2", 00:08:35.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.016 "is_configured": false, 00:08:35.016 "data_offset": 0, 00:08:35.016 "data_size": 0 00:08:35.016 }, 00:08:35.016 { 00:08:35.016 "name": "BaseBdev3", 00:08:35.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.016 "is_configured": false, 00:08:35.016 "data_offset": 0, 00:08:35.016 "data_size": 0 00:08:35.016 } 00:08:35.016 ] 00:08:35.016 }' 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.016 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.595 [2024-12-15 18:39:35.751464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.595 [2024-12-15 18:39:35.751532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.595 [2024-12-15 18:39:35.759499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.595 [2024-12-15 18:39:35.761467] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.595 [2024-12-15 18:39:35.761514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.595 [2024-12-15 18:39:35.761524] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.595 [2024-12-15 18:39:35.761534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.595 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.596 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.596 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.596 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.596 "name": "Existed_Raid", 00:08:35.596 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:35.596 "strip_size_kb": 0, 00:08:35.596 "state": "configuring", 00:08:35.596 "raid_level": "raid1", 00:08:35.596 "superblock": true, 00:08:35.596 "num_base_bdevs": 3, 00:08:35.596 "num_base_bdevs_discovered": 1, 00:08:35.596 "num_base_bdevs_operational": 3, 00:08:35.596 "base_bdevs_list": [ 00:08:35.596 { 00:08:35.596 "name": "BaseBdev1", 00:08:35.596 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:35.596 "is_configured": true, 00:08:35.596 "data_offset": 2048, 00:08:35.596 "data_size": 63488 00:08:35.596 }, 00:08:35.596 { 00:08:35.596 "name": "BaseBdev2", 00:08:35.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.596 "is_configured": false, 00:08:35.596 "data_offset": 0, 00:08:35.596 "data_size": 0 00:08:35.596 }, 00:08:35.596 { 00:08:35.596 "name": "BaseBdev3", 00:08:35.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.596 "is_configured": false, 00:08:35.596 "data_offset": 0, 00:08:35.596 "data_size": 0 00:08:35.596 } 00:08:35.596 ] 00:08:35.596 }' 00:08:35.596 18:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.596 18:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 [2024-12-15 18:39:36.146085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.856 BaseBdev2 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 [ 00:08:35.856 { 00:08:35.856 "name": "BaseBdev2", 00:08:35.856 "aliases": [ 00:08:35.856 "6a8c0ff6-c577-4d79-8553-3a6927afef77" 00:08:35.856 ], 00:08:35.856 "product_name": "Malloc disk", 00:08:35.856 "block_size": 512, 00:08:35.856 "num_blocks": 65536, 00:08:35.856 "uuid": "6a8c0ff6-c577-4d79-8553-3a6927afef77", 00:08:35.856 "assigned_rate_limits": { 00:08:35.856 "rw_ios_per_sec": 0, 00:08:35.856 "rw_mbytes_per_sec": 0, 00:08:35.856 "r_mbytes_per_sec": 0, 00:08:35.856 "w_mbytes_per_sec": 0 00:08:35.856 }, 00:08:35.856 "claimed": true, 00:08:35.856 "claim_type": "exclusive_write", 00:08:35.856 "zoned": false, 00:08:35.856 "supported_io_types": { 00:08:35.856 "read": true, 00:08:35.856 "write": true, 00:08:35.856 "unmap": true, 00:08:35.856 "flush": true, 00:08:35.856 "reset": true, 00:08:35.856 "nvme_admin": false, 00:08:35.856 "nvme_io": false, 00:08:35.856 "nvme_io_md": false, 00:08:35.856 "write_zeroes": true, 00:08:35.856 "zcopy": true, 00:08:35.856 "get_zone_info": false, 00:08:35.856 "zone_management": false, 00:08:35.856 "zone_append": false, 00:08:35.856 "compare": false, 00:08:35.856 "compare_and_write": false, 00:08:35.856 "abort": true, 00:08:35.856 "seek_hole": false, 00:08:35.856 "seek_data": false, 00:08:35.856 "copy": true, 00:08:35.856 "nvme_iov_md": false 00:08:35.856 }, 00:08:35.856 "memory_domains": [ 00:08:35.856 { 00:08:35.856 "dma_device_id": "system", 00:08:35.856 "dma_device_type": 1 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.856 "dma_device_type": 2 00:08:35.856 } 00:08:35.856 ], 00:08:35.856 "driver_specific": {} 00:08:35.856 } 00:08:35.856 ] 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.856 
18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.856 "name": "Existed_Raid", 00:08:35.856 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:35.856 "strip_size_kb": 0, 00:08:35.856 "state": "configuring", 00:08:35.856 "raid_level": "raid1", 00:08:35.856 "superblock": true, 00:08:35.856 "num_base_bdevs": 3, 00:08:35.856 "num_base_bdevs_discovered": 2, 00:08:35.856 "num_base_bdevs_operational": 3, 00:08:35.856 "base_bdevs_list": [ 00:08:35.856 { 00:08:35.856 "name": "BaseBdev1", 00:08:35.856 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:35.856 "is_configured": true, 00:08:35.856 "data_offset": 2048, 00:08:35.856 "data_size": 63488 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "name": "BaseBdev2", 00:08:35.856 "uuid": "6a8c0ff6-c577-4d79-8553-3a6927afef77", 00:08:35.856 "is_configured": true, 00:08:35.856 "data_offset": 2048, 00:08:35.856 "data_size": 63488 00:08:35.856 }, 00:08:35.856 { 00:08:35.856 "name": "BaseBdev3", 00:08:35.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.856 "is_configured": false, 00:08:35.856 "data_offset": 0, 00:08:35.856 "data_size": 0 00:08:35.856 } 00:08:35.856 ] 00:08:35.856 }' 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.856 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.426 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.426 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.426 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.426 [2024-12-15 18:39:36.577936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.426 [2024-12-15 18:39:36.578174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:08:36.426 [2024-12-15 18:39:36.578197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:36.426 [2024-12-15 18:39:36.578555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:36.426 BaseBdev3 00:08:36.426 [2024-12-15 18:39:36.578761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:36.426 [2024-12-15 18:39:36.578783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:36.426 [2024-12-15 18:39:36.578959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.426 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.427 18:39:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.427 [ 00:08:36.427 { 00:08:36.427 "name": "BaseBdev3", 00:08:36.427 "aliases": [ 00:08:36.427 "a33b2f97-895b-4ad5-bf2b-d8266c117c34" 00:08:36.427 ], 00:08:36.427 "product_name": "Malloc disk", 00:08:36.427 "block_size": 512, 00:08:36.427 "num_blocks": 65536, 00:08:36.427 "uuid": "a33b2f97-895b-4ad5-bf2b-d8266c117c34", 00:08:36.427 "assigned_rate_limits": { 00:08:36.427 "rw_ios_per_sec": 0, 00:08:36.427 "rw_mbytes_per_sec": 0, 00:08:36.427 "r_mbytes_per_sec": 0, 00:08:36.427 "w_mbytes_per_sec": 0 00:08:36.427 }, 00:08:36.427 "claimed": true, 00:08:36.427 "claim_type": "exclusive_write", 00:08:36.427 "zoned": false, 00:08:36.427 "supported_io_types": { 00:08:36.427 "read": true, 00:08:36.427 "write": true, 00:08:36.427 "unmap": true, 00:08:36.427 "flush": true, 00:08:36.427 "reset": true, 00:08:36.427 "nvme_admin": false, 00:08:36.427 "nvme_io": false, 00:08:36.427 "nvme_io_md": false, 00:08:36.427 "write_zeroes": true, 00:08:36.427 "zcopy": true, 00:08:36.427 "get_zone_info": false, 00:08:36.427 "zone_management": false, 00:08:36.427 "zone_append": false, 00:08:36.427 "compare": false, 00:08:36.427 "compare_and_write": false, 00:08:36.427 "abort": true, 00:08:36.427 "seek_hole": false, 00:08:36.427 "seek_data": false, 00:08:36.427 "copy": true, 00:08:36.427 "nvme_iov_md": false 00:08:36.427 }, 00:08:36.427 "memory_domains": [ 00:08:36.427 { 00:08:36.427 "dma_device_id": "system", 00:08:36.427 "dma_device_type": 1 00:08:36.427 }, 00:08:36.427 { 00:08:36.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.427 "dma_device_type": 2 00:08:36.427 } 00:08:36.427 ], 00:08:36.427 "driver_specific": {} 00:08:36.427 } 00:08:36.427 ] 
00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.427 
18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.427 "name": "Existed_Raid", 00:08:36.427 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:36.427 "strip_size_kb": 0, 00:08:36.427 "state": "online", 00:08:36.427 "raid_level": "raid1", 00:08:36.427 "superblock": true, 00:08:36.427 "num_base_bdevs": 3, 00:08:36.427 "num_base_bdevs_discovered": 3, 00:08:36.427 "num_base_bdevs_operational": 3, 00:08:36.427 "base_bdevs_list": [ 00:08:36.427 { 00:08:36.427 "name": "BaseBdev1", 00:08:36.427 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:36.427 "is_configured": true, 00:08:36.427 "data_offset": 2048, 00:08:36.427 "data_size": 63488 00:08:36.427 }, 00:08:36.427 { 00:08:36.427 "name": "BaseBdev2", 00:08:36.427 "uuid": "6a8c0ff6-c577-4d79-8553-3a6927afef77", 00:08:36.427 "is_configured": true, 00:08:36.427 "data_offset": 2048, 00:08:36.427 "data_size": 63488 00:08:36.427 }, 00:08:36.427 { 00:08:36.427 "name": "BaseBdev3", 00:08:36.427 "uuid": "a33b2f97-895b-4ad5-bf2b-d8266c117c34", 00:08:36.427 "is_configured": true, 00:08:36.427 "data_offset": 2048, 00:08:36.427 "data_size": 63488 00:08:36.427 } 00:08:36.427 ] 00:08:36.427 }' 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.427 18:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.686 [2024-12-15 18:39:37.029516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.686 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.686 "name": "Existed_Raid", 00:08:36.686 "aliases": [ 00:08:36.686 "64a583ae-695a-4cf0-ada7-cc9f98d2b410" 00:08:36.686 ], 00:08:36.686 "product_name": "Raid Volume", 00:08:36.686 "block_size": 512, 00:08:36.686 "num_blocks": 63488, 00:08:36.686 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:36.686 "assigned_rate_limits": { 00:08:36.686 "rw_ios_per_sec": 0, 00:08:36.686 "rw_mbytes_per_sec": 0, 00:08:36.686 "r_mbytes_per_sec": 0, 00:08:36.686 "w_mbytes_per_sec": 0 00:08:36.686 }, 00:08:36.686 "claimed": false, 00:08:36.686 "zoned": false, 00:08:36.686 "supported_io_types": { 00:08:36.686 "read": true, 00:08:36.686 "write": true, 00:08:36.686 "unmap": false, 00:08:36.686 "flush": false, 00:08:36.686 "reset": true, 00:08:36.686 "nvme_admin": false, 00:08:36.686 "nvme_io": false, 00:08:36.686 "nvme_io_md": false, 00:08:36.686 "write_zeroes": true, 
00:08:36.686 "zcopy": false, 00:08:36.686 "get_zone_info": false, 00:08:36.686 "zone_management": false, 00:08:36.686 "zone_append": false, 00:08:36.686 "compare": false, 00:08:36.686 "compare_and_write": false, 00:08:36.686 "abort": false, 00:08:36.686 "seek_hole": false, 00:08:36.687 "seek_data": false, 00:08:36.687 "copy": false, 00:08:36.687 "nvme_iov_md": false 00:08:36.687 }, 00:08:36.687 "memory_domains": [ 00:08:36.687 { 00:08:36.687 "dma_device_id": "system", 00:08:36.687 "dma_device_type": 1 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.687 "dma_device_type": 2 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "dma_device_id": "system", 00:08:36.687 "dma_device_type": 1 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.687 "dma_device_type": 2 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "dma_device_id": "system", 00:08:36.687 "dma_device_type": 1 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.687 "dma_device_type": 2 00:08:36.687 } 00:08:36.687 ], 00:08:36.687 "driver_specific": { 00:08:36.687 "raid": { 00:08:36.687 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:36.687 "strip_size_kb": 0, 00:08:36.687 "state": "online", 00:08:36.687 "raid_level": "raid1", 00:08:36.687 "superblock": true, 00:08:36.687 "num_base_bdevs": 3, 00:08:36.687 "num_base_bdevs_discovered": 3, 00:08:36.687 "num_base_bdevs_operational": 3, 00:08:36.687 "base_bdevs_list": [ 00:08:36.687 { 00:08:36.687 "name": "BaseBdev1", 00:08:36.687 "uuid": "509b336c-0f9c-44c0-8425-16739ee31851", 00:08:36.687 "is_configured": true, 00:08:36.687 "data_offset": 2048, 00:08:36.687 "data_size": 63488 00:08:36.687 }, 00:08:36.687 { 00:08:36.687 "name": "BaseBdev2", 00:08:36.687 "uuid": "6a8c0ff6-c577-4d79-8553-3a6927afef77", 00:08:36.687 "is_configured": true, 00:08:36.687 "data_offset": 2048, 00:08:36.687 "data_size": 63488 00:08:36.687 }, 00:08:36.687 { 
00:08:36.687 "name": "BaseBdev3", 00:08:36.687 "uuid": "a33b2f97-895b-4ad5-bf2b-d8266c117c34", 00:08:36.687 "is_configured": true, 00:08:36.687 "data_offset": 2048, 00:08:36.687 "data_size": 63488 00:08:36.687 } 00:08:36.687 ] 00:08:36.687 } 00:08:36.687 } 00:08:36.687 }' 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:36.687 BaseBdev2 00:08:36.687 BaseBdev3' 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.687 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.946 18:39:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.946 [2024-12-15 18:39:37.236912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.946 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.946 
18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.947 "name": "Existed_Raid", 00:08:36.947 "uuid": "64a583ae-695a-4cf0-ada7-cc9f98d2b410", 00:08:36.947 "strip_size_kb": 0, 00:08:36.947 "state": "online", 00:08:36.947 "raid_level": "raid1", 00:08:36.947 "superblock": true, 00:08:36.947 "num_base_bdevs": 3, 00:08:36.947 "num_base_bdevs_discovered": 2, 00:08:36.947 "num_base_bdevs_operational": 2, 00:08:36.947 "base_bdevs_list": [ 00:08:36.947 { 00:08:36.947 "name": null, 00:08:36.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.947 "is_configured": false, 00:08:36.947 "data_offset": 0, 00:08:36.947 "data_size": 63488 00:08:36.947 }, 00:08:36.947 { 00:08:36.947 "name": "BaseBdev2", 00:08:36.947 "uuid": "6a8c0ff6-c577-4d79-8553-3a6927afef77", 00:08:36.947 "is_configured": true, 00:08:36.947 "data_offset": 2048, 00:08:36.947 "data_size": 63488 00:08:36.947 }, 00:08:36.947 { 00:08:36.947 "name": "BaseBdev3", 00:08:36.947 "uuid": "a33b2f97-895b-4ad5-bf2b-d8266c117c34", 00:08:36.947 "is_configured": true, 00:08:36.947 "data_offset": 2048, 00:08:36.947 "data_size": 63488 00:08:36.947 } 00:08:36.947 ] 00:08:36.947 }' 00:08:36.947 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.947 
18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.515 [2024-12-15 18:39:37.707626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.515 [2024-12-15 18:39:37.775018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:37.515 [2024-12-15 18:39:37.775163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:37.515 [2024-12-15 18:39:37.786853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:37.515 [2024-12-15 18:39:37.786905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:37.515 [2024-12-15 18:39:37.786919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:37.515 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 BaseBdev2
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 [
00:08:37.516 {
00:08:37.516 "name": "BaseBdev2",
00:08:37.516 "aliases": [
00:08:37.516 "22105ead-faec-457d-8b87-0ca6470ff574"
00:08:37.516 ],
00:08:37.516 "product_name": "Malloc disk",
00:08:37.516 "block_size": 512,
00:08:37.516 "num_blocks": 65536,
00:08:37.516 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:37.516 "assigned_rate_limits": {
00:08:37.516 "rw_ios_per_sec": 0,
00:08:37.516 "rw_mbytes_per_sec": 0,
00:08:37.516 "r_mbytes_per_sec": 0,
00:08:37.516 "w_mbytes_per_sec": 0
00:08:37.516 },
00:08:37.516 "claimed": false,
00:08:37.516 "zoned": false,
00:08:37.516 "supported_io_types": {
00:08:37.516 "read": true,
00:08:37.516 "write": true,
00:08:37.516 "unmap": true,
00:08:37.516 "flush": true,
00:08:37.516 "reset": true,
00:08:37.516 "nvme_admin": false,
00:08:37.516 "nvme_io": false,
00:08:37.516 "nvme_io_md": false,
00:08:37.516 "write_zeroes": true,
00:08:37.516 "zcopy": true,
00:08:37.516 "get_zone_info": false,
00:08:37.516 "zone_management": false,
00:08:37.516 "zone_append": false,
00:08:37.516 "compare": false,
00:08:37.516 "compare_and_write": false,
00:08:37.516 "abort": true,
00:08:37.516 "seek_hole": false,
00:08:37.516 "seek_data": false,
00:08:37.516 "copy": true,
00:08:37.516 "nvme_iov_md": false
00:08:37.516 },
00:08:37.516 "memory_domains": [
00:08:37.516 {
00:08:37.516 "dma_device_id": "system",
00:08:37.516 "dma_device_type": 1
00:08:37.516 },
00:08:37.516 {
00:08:37.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.516 "dma_device_type": 2
00:08:37.516 }
00:08:37.516 ],
00:08:37.516 "driver_specific": {}
00:08:37.516 }
00:08:37.516 ]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 BaseBdev3
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 [
00:08:37.516 {
00:08:37.516 "name": "BaseBdev3",
00:08:37.516 "aliases": [
00:08:37.516 "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362"
00:08:37.516 ],
00:08:37.516 "product_name": "Malloc disk",
00:08:37.516 "block_size": 512,
00:08:37.516 "num_blocks": 65536,
00:08:37.516 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:37.516 "assigned_rate_limits": {
00:08:37.516 "rw_ios_per_sec": 0,
00:08:37.516 "rw_mbytes_per_sec": 0,
00:08:37.516 "r_mbytes_per_sec": 0,
00:08:37.516 "w_mbytes_per_sec": 0
00:08:37.516 },
00:08:37.516 "claimed": false,
00:08:37.516 "zoned": false,
00:08:37.516 "supported_io_types": {
00:08:37.516 "read": true,
00:08:37.516 "write": true,
00:08:37.516 "unmap": true,
00:08:37.516 "flush": true,
00:08:37.516 "reset": true,
00:08:37.516 "nvme_admin": false,
00:08:37.516 "nvme_io": false,
00:08:37.516 "nvme_io_md": false,
00:08:37.516 "write_zeroes": true,
00:08:37.516 "zcopy": true,
00:08:37.516 "get_zone_info": false,
00:08:37.516 "zone_management": false,
00:08:37.516 "zone_append": false,
00:08:37.516 "compare": false,
00:08:37.516 "compare_and_write": false,
00:08:37.516 "abort": true,
00:08:37.516 "seek_hole": false,
00:08:37.516 "seek_data": false,
00:08:37.516 "copy": true,
00:08:37.516 "nvme_iov_md": false
00:08:37.516 },
00:08:37.516 "memory_domains": [
00:08:37.516 {
00:08:37.516 "dma_device_id": "system",
00:08:37.516 "dma_device_type": 1
00:08:37.516 },
00:08:37.516 {
00:08:37.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:37.516 "dma_device_type": 2
00:08:37.516 }
00:08:37.516 ],
00:08:37.516 "driver_specific": {}
00:08:37.516 }
00:08:37.516 ]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.516 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.516 [2024-12-15 18:39:37.952536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:37.516 [2024-12-15 18:39:37.952585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:37.516 [2024-12-15 18:39:37.952605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:37.776 [2024-12-15 18:39:37.954620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:37.776 18:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:37.776 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:37.776 "name": "Existed_Raid",
00:08:37.776 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412",
00:08:37.776 "strip_size_kb": 0,
00:08:37.776 "state": "configuring",
00:08:37.776 "raid_level": "raid1",
00:08:37.776 "superblock": true,
00:08:37.776 "num_base_bdevs": 3,
00:08:37.776 "num_base_bdevs_discovered": 2,
00:08:37.776 "num_base_bdevs_operational": 3,
00:08:37.776 "base_bdevs_list": [
00:08:37.776 {
00:08:37.776 "name": "BaseBdev1",
00:08:37.776 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:37.776 "is_configured": false,
00:08:37.776 "data_offset": 0,
00:08:37.776 "data_size": 0
00:08:37.776 },
00:08:37.776 {
00:08:37.776 "name": "BaseBdev2",
00:08:37.776 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:37.776 "is_configured": true,
00:08:37.776 "data_offset": 2048,
00:08:37.776 "data_size": 63488
00:08:37.776 },
00:08:37.776 {
00:08:37.776 "name": "BaseBdev3",
00:08:37.776 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:37.776 "is_configured": true,
00:08:37.776 "data_offset": 2048,
00:08:37.776 "data_size": 63488
00:08:37.776 }
00:08:37.776 ]
00:08:37.776 }'
00:08:37.776 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:37.776 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.035 [2024-12-15 18:39:38.331936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.035 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.036 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.036 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.036 "name": "Existed_Raid",
00:08:38.036 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412",
00:08:38.036 "strip_size_kb": 0,
00:08:38.036 "state": "configuring",
00:08:38.036 "raid_level": "raid1",
00:08:38.036 "superblock": true,
00:08:38.036 "num_base_bdevs": 3,
00:08:38.036 "num_base_bdevs_discovered": 1,
00:08:38.036 "num_base_bdevs_operational": 3,
00:08:38.036 "base_bdevs_list": [
00:08:38.036 {
00:08:38.036 "name": "BaseBdev1",
00:08:38.036 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:38.036 "is_configured": false,
00:08:38.036 "data_offset": 0,
00:08:38.036 "data_size": 0
00:08:38.036 },
00:08:38.036 {
00:08:38.036 "name": null,
00:08:38.036 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:38.036 "is_configured": false,
00:08:38.036 "data_offset": 0,
00:08:38.036 "data_size": 63488
00:08:38.036 },
00:08:38.036 {
00:08:38.036 "name": "BaseBdev3",
00:08:38.036 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:38.036 "is_configured": true,
00:08:38.036 "data_offset": 2048,
00:08:38.036 "data_size": 63488
00:08:38.036 }
00:08:38.036 ]
00:08:38.036 }'
00:08:38.036 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.036 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.295 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.295 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:38.295 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.295 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.554 BaseBdev1 [2024-12-15 18:39:38.790171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.554 [
00:08:38.554 {
00:08:38.554 "name": "BaseBdev1",
00:08:38.554 "aliases": [
00:08:38.554 "d96922a4-a087-4890-809a-33b916652cda"
00:08:38.554 ],
00:08:38.554 "product_name": "Malloc disk",
00:08:38.554 "block_size": 512,
00:08:38.554 "num_blocks": 65536,
00:08:38.554 "uuid": "d96922a4-a087-4890-809a-33b916652cda",
00:08:38.554 "assigned_rate_limits": {
00:08:38.554 "rw_ios_per_sec": 0,
00:08:38.554 "rw_mbytes_per_sec": 0,
00:08:38.554 "r_mbytes_per_sec": 0,
00:08:38.554 "w_mbytes_per_sec": 0
00:08:38.554 },
00:08:38.554 "claimed": true,
00:08:38.554 "claim_type": "exclusive_write",
00:08:38.554 "zoned": false,
00:08:38.554 "supported_io_types": {
00:08:38.554 "read": true,
00:08:38.554 "write": true,
00:08:38.554 "unmap": true,
00:08:38.554 "flush": true,
00:08:38.554 "reset": true,
00:08:38.554 "nvme_admin": false,
00:08:38.554 "nvme_io": false,
00:08:38.554 "nvme_io_md": false,
00:08:38.554 "write_zeroes": true,
00:08:38.554 "zcopy": true,
00:08:38.554 "get_zone_info": false,
00:08:38.554 "zone_management": false,
00:08:38.554 "zone_append": false,
00:08:38.554 "compare": false,
00:08:38.554 "compare_and_write": false,
00:08:38.554 "abort": true,
00:08:38.554 "seek_hole": false,
00:08:38.554 "seek_data": false,
00:08:38.554 "copy": true,
00:08:38.554 "nvme_iov_md": false
00:08:38.554 },
00:08:38.554 "memory_domains": [
00:08:38.554 {
00:08:38.554 "dma_device_id": "system",
00:08:38.554 "dma_device_type": 1
00:08:38.554 },
00:08:38.554 {
00:08:38.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:38.554 "dma_device_type": 2
00:08:38.554 }
00:08:38.554 ],
00:08:38.554 "driver_specific": {}
00:08:38.554 }
00:08:38.554 ]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.554 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:38.555 "name": "Existed_Raid",
00:08:38.555 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412",
00:08:38.555 "strip_size_kb": 0,
00:08:38.555 "state": "configuring",
00:08:38.555 "raid_level": "raid1",
00:08:38.555 "superblock": true,
00:08:38.555 "num_base_bdevs": 3,
00:08:38.555 "num_base_bdevs_discovered": 2,
00:08:38.555 "num_base_bdevs_operational": 3,
00:08:38.555 "base_bdevs_list": [
00:08:38.555 {
00:08:38.555 "name": "BaseBdev1",
00:08:38.555 "uuid": "d96922a4-a087-4890-809a-33b916652cda",
00:08:38.555 "is_configured": true,
00:08:38.555 "data_offset": 2048,
00:08:38.555 "data_size": 63488
00:08:38.555 },
00:08:38.555 {
00:08:38.555 "name": null,
00:08:38.555 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:38.555 "is_configured": false,
00:08:38.555 "data_offset": 0,
00:08:38.555 "data_size": 63488
00:08:38.555 },
00:08:38.555 {
00:08:38.555 "name": "BaseBdev3",
00:08:38.555 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:38.555 "is_configured": true,
00:08:38.555 "data_offset": 2048,
00:08:38.555 "data_size": 63488
00:08:38.555 }
00:08:38.555 ]
00:08:38.555 }'
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:38.555 18:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:38.814 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:38.814 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:38.814 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:38.814 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.073 [2024-12-15 18:39:39.273517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.073 "name": "Existed_Raid",
00:08:39.073 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412",
00:08:39.073 "strip_size_kb": 0,
00:08:39.073 "state": "configuring",
00:08:39.073 "raid_level": "raid1",
00:08:39.073 "superblock": true,
00:08:39.073 "num_base_bdevs": 3,
00:08:39.073 "num_base_bdevs_discovered": 1,
00:08:39.073 "num_base_bdevs_operational": 3,
00:08:39.073 "base_bdevs_list": [
00:08:39.073 {
00:08:39.073 "name": "BaseBdev1",
00:08:39.073 "uuid": "d96922a4-a087-4890-809a-33b916652cda",
00:08:39.073 "is_configured": true,
00:08:39.073 "data_offset": 2048,
00:08:39.073 "data_size": 63488
00:08:39.073 },
00:08:39.073 {
00:08:39.073 "name": null,
00:08:39.073 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:39.073 "is_configured": false,
00:08:39.073 "data_offset": 0,
00:08:39.073 "data_size": 63488
00:08:39.073 },
00:08:39.073 {
00:08:39.073 "name": null,
00:08:39.073 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:39.073 "is_configured": false,
00:08:39.073 "data_offset": 0,
00:08:39.073 "data_size": 63488
00:08:39.073 }
00:08:39.073 ]
00:08:39.073 }'
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.073 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.332 [2024-12-15 18:39:39.700832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.332 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:39.332 "name": "Existed_Raid",
00:08:39.332 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412",
00:08:39.332 "strip_size_kb": 0,
00:08:39.332 "state": "configuring",
00:08:39.332 "raid_level": "raid1",
00:08:39.332 "superblock": true,
00:08:39.332 "num_base_bdevs": 3,
00:08:39.332 "num_base_bdevs_discovered": 2,
00:08:39.332 "num_base_bdevs_operational": 3,
00:08:39.332 "base_bdevs_list": [
00:08:39.332 {
00:08:39.332 "name": "BaseBdev1",
00:08:39.332 "uuid": "d96922a4-a087-4890-809a-33b916652cda",
00:08:39.332 "is_configured": true,
00:08:39.332 "data_offset": 2048,
00:08:39.332 "data_size": 63488
00:08:39.332 },
00:08:39.332 {
00:08:39.332 "name": null,
00:08:39.333 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574",
00:08:39.333 "is_configured": false,
00:08:39.333 "data_offset": 0,
00:08:39.333 "data_size": 63488
00:08:39.333 },
00:08:39.333 {
00:08:39.333 "name": "BaseBdev3",
00:08:39.333 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362",
00:08:39.333 "is_configured": true,
00:08:39.333 "data_offset": 2048,
00:08:39.333 "data_size": 63488
00:08:39.333 }
00:08:39.333 ]
00:08:39.333 }'
00:08:39.333 18:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:39.333 18:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:39.901 [2024-12-15 18:39:40.148169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 --
# local strip_size=0 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.901 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.902 "name": "Existed_Raid", 00:08:39.902 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412", 00:08:39.902 "strip_size_kb": 0, 00:08:39.902 "state": "configuring", 00:08:39.902 "raid_level": "raid1", 00:08:39.902 "superblock": true, 00:08:39.902 "num_base_bdevs": 3, 00:08:39.902 "num_base_bdevs_discovered": 1, 00:08:39.902 "num_base_bdevs_operational": 3, 00:08:39.902 "base_bdevs_list": [ 00:08:39.902 { 00:08:39.902 "name": null, 00:08:39.902 "uuid": "d96922a4-a087-4890-809a-33b916652cda", 00:08:39.902 "is_configured": false, 00:08:39.902 "data_offset": 0, 00:08:39.902 "data_size": 63488 00:08:39.902 }, 00:08:39.902 { 00:08:39.902 "name": null, 00:08:39.902 "uuid": 
"22105ead-faec-457d-8b87-0ca6470ff574", 00:08:39.902 "is_configured": false, 00:08:39.902 "data_offset": 0, 00:08:39.902 "data_size": 63488 00:08:39.902 }, 00:08:39.902 { 00:08:39.902 "name": "BaseBdev3", 00:08:39.902 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362", 00:08:39.902 "is_configured": true, 00:08:39.902 "data_offset": 2048, 00:08:39.902 "data_size": 63488 00:08:39.902 } 00:08:39.902 ] 00:08:39.902 }' 00:08:39.902 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.902 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.161 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:40.161 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.161 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.161 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.161 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.421 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:40.421 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:40.421 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.421 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.421 [2024-12-15 18:39:40.610074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.421 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.422 "name": "Existed_Raid", 00:08:40.422 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412", 00:08:40.422 "strip_size_kb": 0, 00:08:40.422 "state": "configuring", 00:08:40.422 
"raid_level": "raid1", 00:08:40.422 "superblock": true, 00:08:40.422 "num_base_bdevs": 3, 00:08:40.422 "num_base_bdevs_discovered": 2, 00:08:40.422 "num_base_bdevs_operational": 3, 00:08:40.422 "base_bdevs_list": [ 00:08:40.422 { 00:08:40.422 "name": null, 00:08:40.422 "uuid": "d96922a4-a087-4890-809a-33b916652cda", 00:08:40.422 "is_configured": false, 00:08:40.422 "data_offset": 0, 00:08:40.422 "data_size": 63488 00:08:40.422 }, 00:08:40.422 { 00:08:40.422 "name": "BaseBdev2", 00:08:40.422 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574", 00:08:40.422 "is_configured": true, 00:08:40.422 "data_offset": 2048, 00:08:40.422 "data_size": 63488 00:08:40.422 }, 00:08:40.422 { 00:08:40.422 "name": "BaseBdev3", 00:08:40.422 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362", 00:08:40.422 "is_configured": true, 00:08:40.422 "data_offset": 2048, 00:08:40.422 "data_size": 63488 00:08:40.422 } 00:08:40.422 ] 00:08:40.422 }' 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.422 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.681 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.681 18:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.681 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.681 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.681 18:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.682 18:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d96922a4-a087-4890-809a-33b916652cda 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.682 NewBaseBdev 00:08:40.682 [2024-12-15 18:39:41.064433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:40.682 [2024-12-15 18:39:41.064612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:40.682 [2024-12-15 18:39:41.064626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.682 [2024-12-15 18:39:41.064942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:40.682 [2024-12-15 18:39:41.065098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:40.682 [2024-12-15 18:39:41.065128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:40.682 [2024-12-15 18:39:41.065255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:40.682 
18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.682 [ 00:08:40.682 { 00:08:40.682 "name": "NewBaseBdev", 00:08:40.682 "aliases": [ 00:08:40.682 "d96922a4-a087-4890-809a-33b916652cda" 00:08:40.682 ], 00:08:40.682 "product_name": "Malloc disk", 00:08:40.682 "block_size": 512, 00:08:40.682 "num_blocks": 65536, 00:08:40.682 "uuid": "d96922a4-a087-4890-809a-33b916652cda", 00:08:40.682 "assigned_rate_limits": { 00:08:40.682 "rw_ios_per_sec": 0, 00:08:40.682 "rw_mbytes_per_sec": 0, 00:08:40.682 "r_mbytes_per_sec": 0, 00:08:40.682 "w_mbytes_per_sec": 0 00:08:40.682 }, 00:08:40.682 "claimed": true, 00:08:40.682 "claim_type": "exclusive_write", 00:08:40.682 
"zoned": false, 00:08:40.682 "supported_io_types": { 00:08:40.682 "read": true, 00:08:40.682 "write": true, 00:08:40.682 "unmap": true, 00:08:40.682 "flush": true, 00:08:40.682 "reset": true, 00:08:40.682 "nvme_admin": false, 00:08:40.682 "nvme_io": false, 00:08:40.682 "nvme_io_md": false, 00:08:40.682 "write_zeroes": true, 00:08:40.682 "zcopy": true, 00:08:40.682 "get_zone_info": false, 00:08:40.682 "zone_management": false, 00:08:40.682 "zone_append": false, 00:08:40.682 "compare": false, 00:08:40.682 "compare_and_write": false, 00:08:40.682 "abort": true, 00:08:40.682 "seek_hole": false, 00:08:40.682 "seek_data": false, 00:08:40.682 "copy": true, 00:08:40.682 "nvme_iov_md": false 00:08:40.682 }, 00:08:40.682 "memory_domains": [ 00:08:40.682 { 00:08:40.682 "dma_device_id": "system", 00:08:40.682 "dma_device_type": 1 00:08:40.682 }, 00:08:40.682 { 00:08:40.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.682 "dma_device_type": 2 00:08:40.682 } 00:08:40.682 ], 00:08:40.682 "driver_specific": {} 00:08:40.682 } 00:08:40.682 ] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.682 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.942 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.942 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.942 "name": "Existed_Raid", 00:08:40.942 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412", 00:08:40.942 "strip_size_kb": 0, 00:08:40.942 "state": "online", 00:08:40.942 "raid_level": "raid1", 00:08:40.942 "superblock": true, 00:08:40.942 "num_base_bdevs": 3, 00:08:40.942 "num_base_bdevs_discovered": 3, 00:08:40.942 "num_base_bdevs_operational": 3, 00:08:40.942 "base_bdevs_list": [ 00:08:40.942 { 00:08:40.942 "name": "NewBaseBdev", 00:08:40.942 "uuid": "d96922a4-a087-4890-809a-33b916652cda", 00:08:40.942 "is_configured": true, 00:08:40.942 "data_offset": 2048, 00:08:40.942 "data_size": 63488 00:08:40.942 }, 00:08:40.942 { 00:08:40.942 "name": "BaseBdev2", 00:08:40.942 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574", 00:08:40.942 "is_configured": true, 00:08:40.942 "data_offset": 2048, 00:08:40.942 "data_size": 63488 00:08:40.942 }, 00:08:40.942 
{ 00:08:40.942 "name": "BaseBdev3", 00:08:40.942 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362", 00:08:40.942 "is_configured": true, 00:08:40.942 "data_offset": 2048, 00:08:40.942 "data_size": 63488 00:08:40.942 } 00:08:40.942 ] 00:08:40.942 }' 00:08:40.942 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.942 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.202 [2024-12-15 18:39:41.480484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.202 "name": "Existed_Raid", 00:08:41.202 
"aliases": [ 00:08:41.202 "14a01bae-3401-4f28-ba95-1aa686442412" 00:08:41.202 ], 00:08:41.202 "product_name": "Raid Volume", 00:08:41.202 "block_size": 512, 00:08:41.202 "num_blocks": 63488, 00:08:41.202 "uuid": "14a01bae-3401-4f28-ba95-1aa686442412", 00:08:41.202 "assigned_rate_limits": { 00:08:41.202 "rw_ios_per_sec": 0, 00:08:41.202 "rw_mbytes_per_sec": 0, 00:08:41.202 "r_mbytes_per_sec": 0, 00:08:41.202 "w_mbytes_per_sec": 0 00:08:41.202 }, 00:08:41.202 "claimed": false, 00:08:41.202 "zoned": false, 00:08:41.202 "supported_io_types": { 00:08:41.202 "read": true, 00:08:41.202 "write": true, 00:08:41.202 "unmap": false, 00:08:41.202 "flush": false, 00:08:41.202 "reset": true, 00:08:41.202 "nvme_admin": false, 00:08:41.202 "nvme_io": false, 00:08:41.202 "nvme_io_md": false, 00:08:41.202 "write_zeroes": true, 00:08:41.202 "zcopy": false, 00:08:41.202 "get_zone_info": false, 00:08:41.202 "zone_management": false, 00:08:41.202 "zone_append": false, 00:08:41.202 "compare": false, 00:08:41.202 "compare_and_write": false, 00:08:41.202 "abort": false, 00:08:41.202 "seek_hole": false, 00:08:41.202 "seek_data": false, 00:08:41.202 "copy": false, 00:08:41.202 "nvme_iov_md": false 00:08:41.202 }, 00:08:41.202 "memory_domains": [ 00:08:41.202 { 00:08:41.202 "dma_device_id": "system", 00:08:41.202 "dma_device_type": 1 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.202 "dma_device_type": 2 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "dma_device_id": "system", 00:08:41.202 "dma_device_type": 1 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.202 "dma_device_type": 2 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "dma_device_id": "system", 00:08:41.202 "dma_device_type": 1 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.202 "dma_device_type": 2 00:08:41.202 } 00:08:41.202 ], 00:08:41.202 "driver_specific": { 00:08:41.202 "raid": { 00:08:41.202 
"uuid": "14a01bae-3401-4f28-ba95-1aa686442412", 00:08:41.202 "strip_size_kb": 0, 00:08:41.202 "state": "online", 00:08:41.202 "raid_level": "raid1", 00:08:41.202 "superblock": true, 00:08:41.202 "num_base_bdevs": 3, 00:08:41.202 "num_base_bdevs_discovered": 3, 00:08:41.202 "num_base_bdevs_operational": 3, 00:08:41.202 "base_bdevs_list": [ 00:08:41.202 { 00:08:41.202 "name": "NewBaseBdev", 00:08:41.202 "uuid": "d96922a4-a087-4890-809a-33b916652cda", 00:08:41.202 "is_configured": true, 00:08:41.202 "data_offset": 2048, 00:08:41.202 "data_size": 63488 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "name": "BaseBdev2", 00:08:41.202 "uuid": "22105ead-faec-457d-8b87-0ca6470ff574", 00:08:41.202 "is_configured": true, 00:08:41.202 "data_offset": 2048, 00:08:41.202 "data_size": 63488 00:08:41.202 }, 00:08:41.202 { 00:08:41.202 "name": "BaseBdev3", 00:08:41.202 "uuid": "4ca447ae-f5b8-44e2-8a0c-e2dc1429c362", 00:08:41.202 "is_configured": true, 00:08:41.202 "data_offset": 2048, 00:08:41.202 "data_size": 63488 00:08:41.202 } 00:08:41.202 ] 00:08:41.202 } 00:08:41.202 } 00:08:41.202 }' 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:41.202 BaseBdev2 00:08:41.202 BaseBdev3' 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.202 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.203 
18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.203 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.463 [2024-12-15 18:39:41.703769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.463 [2024-12-15 18:39:41.703853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.463 [2024-12-15 18:39:41.703924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.463 [2024-12-15 18:39:41.704166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.463 [2024-12-15 18:39:41.704177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80998 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80998 ']' 00:08:41.463 18:39:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80998 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80998 00:08:41.463 killing process with pid 80998 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80998' 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80998 00:08:41.463 [2024-12-15 18:39:41.749176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.463 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80998 00:08:41.463 [2024-12-15 18:39:41.781194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.723 18:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.723 00:08:41.723 real 0m8.125s 00:08:41.723 user 0m13.731s 00:08:41.723 sys 0m1.757s 00:08:41.723 ************************************ 00:08:41.723 END TEST raid_state_function_test_sb 00:08:41.723 ************************************ 00:08:41.723 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.723 18:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.723 18:39:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:08:41.723 18:39:42 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.723 18:39:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.723 18:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.723 ************************************ 00:08:41.723 START TEST raid_superblock_test 00:08:41.723 ************************************ 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:41.723 18:39:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81591 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81591 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81591 ']' 00:08:41.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.723 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.723 [2024-12-15 18:39:42.138613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:41.723 [2024-12-15 18:39:42.138819] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81591 ] 00:08:41.983 [2024-12-15 18:39:42.307686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.983 [2024-12-15 18:39:42.335325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.983 [2024-12-15 18:39:42.378890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.983 [2024-12-15 18:39:42.378995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:42.553 
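For readers tracing the startup entries above: the `[ DPDK EAL parameters: ... ]` line logged by `bdev_svc` is a flat flag string, and splitting it apart makes the run configuration (single core `-c 0x1`, physical-address IOVA mode, per-library log levels) easier to inspect. The snippet below is an illustrative parse of the exact string from this log, not part of the test itself.

```python
# Split the "[ DPDK EAL parameters: ... ]" entry logged above into flags.
# The string is copied verbatim from this log; the parsing is illustrative.
eal_line = ("bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry "
            "--log-level=lib.eal:6 --log-level=lib.cryptodev:5 "
            "--log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa "
            "--base-virtaddr=0x200000000000 --match-allocations "
            "--file-prefix=spdk_pid81591")

tokens = eal_line.split()
prog, flags = tokens[0], tokens[1:]

print(prog)                    # bdev_svc
print("-c" in flags)           # True (core mask follows as "0x1")
print("--iova-mode=pa" in flags)  # True
```

The `--file-prefix=spdk_pid81591` suffix matches the `raid_pid=81591` assigned a few entries earlier, which is how concurrent autotest apps keep their hugepage files separate.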
18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.553 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 malloc1 00:08:42.815 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.815 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 [2024-12-15 18:39:42.999693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.815 [2024-12-15 18:39:42.999816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.815 [2024-12-15 18:39:42.999856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:42.815 [2024-12-15 18:39:42.999892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.815 [2024-12-15 18:39:43.002090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.815 [2024-12-15 18:39:43.002276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.815 pt1 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 malloc2 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 [2024-12-15 18:39:43.028587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.815 [2024-12-15 18:39:43.028843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.815 [2024-12-15 18:39:43.028954] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.815 [2024-12-15 18:39:43.029051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.815 [2024-12-15 18:39:43.031247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.815 [2024-12-15 18:39:43.031374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.815 
pt2 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 malloc3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 [2024-12-15 18:39:43.057220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.815 [2024-12-15 18:39:43.057420] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.815 [2024-12-15 18:39:43.057513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.815 [2024-12-15 18:39:43.057625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.815 [2024-12-15 18:39:43.059685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.815 [2024-12-15 18:39:43.059830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.815 pt3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.815 [2024-12-15 18:39:43.069237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.815 [2024-12-15 18:39:43.071139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.815 [2024-12-15 18:39:43.071234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.815 [2024-12-15 18:39:43.071390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:42.815 [2024-12-15 18:39:43.071432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.815 [2024-12-15 18:39:43.071722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:42.815 
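The creation sequence logged above (three `bdev_malloc_create 32 512` base bdevs, each wrapped by `bdev_passthru_create`, then `bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s`) maps onto a fixed series of JSON-RPC calls. The sketch below rebuilds those request payloads from the parameters visible in this log; the JSON-RPC 2.0 envelope and the exact parameter names are an assumption drawn from SPDK's public RPC interface, not captured from the wire.

```python
import json

def rpc(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope, as rpc.py would send it (illustrative)."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Parameters taken from the rpc_cmd invocations logged above:
# 32 MiB malloc bdevs with 512-byte blocks, raid1 with superblock (-s).
requests = []
for i, (mb, pt, uuid) in enumerate([
    ("malloc1", "pt1", "00000000-0000-0000-0000-000000000001"),
    ("malloc2", "pt2", "00000000-0000-0000-0000-000000000002"),
    ("malloc3", "pt3", "00000000-0000-0000-0000-000000000003"),
], start=1):
    # 32 MiB / 512 B = 65536 blocks; minus the 2048-block superblock
    # data_offset this yields the "blockcnt 63488" seen in the log.
    requests.append(rpc(2 * i - 1, "bdev_malloc_create",
                        {"num_blocks": 32 * 1024 * 1024 // 512,
                         "block_size": 512, "name": mb}))
    requests.append(rpc(2 * i, "bdev_passthru_create",
                        {"base_bdev_name": mb, "name": pt, "uuid": uuid}))

requests.append(rpc(7, "bdev_raid_create",
                    {"name": "raid_bdev1", "raid_level": "raid1",
                     "base_bdevs": ["pt1", "pt2", "pt3"],
                     "superblock": True}))

print(json.dumps([r["method"] for r in requests]))
```

Note that no `strip_size_kb` is sent: raid1 has no striping, which is why the test sets `strip_size=0` and `verify_raid_bdev_state` later checks that exact value.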
[2024-12-15 18:39:43.071920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:42.815 [2024-12-15 18:39:43.071965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:42.815 [2024-12-15 18:39:43.072118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.815 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.815 "name": "raid_bdev1", 00:08:42.815 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:42.815 "strip_size_kb": 0, 00:08:42.815 "state": "online", 00:08:42.815 "raid_level": "raid1", 00:08:42.815 "superblock": true, 00:08:42.815 "num_base_bdevs": 3, 00:08:42.815 "num_base_bdevs_discovered": 3, 00:08:42.815 "num_base_bdevs_operational": 3, 00:08:42.815 "base_bdevs_list": [ 00:08:42.815 { 00:08:42.815 "name": "pt1", 00:08:42.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.815 "is_configured": true, 00:08:42.815 "data_offset": 2048, 00:08:42.815 "data_size": 63488 00:08:42.815 }, 00:08:42.815 { 00:08:42.815 "name": "pt2", 00:08:42.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.815 "is_configured": true, 00:08:42.815 "data_offset": 2048, 00:08:42.815 "data_size": 63488 00:08:42.816 }, 00:08:42.816 { 00:08:42.816 "name": "pt3", 00:08:42.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.816 "is_configured": true, 00:08:42.816 "data_offset": 2048, 00:08:42.816 "data_size": 63488 00:08:42.816 } 00:08:42.816 ] 00:08:42.816 }' 00:08:42.816 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.816 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:43.078 18:39:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.078 [2024-12-15 18:39:43.452871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.078 "name": "raid_bdev1", 00:08:43.078 "aliases": [ 00:08:43.078 "c8c5d1e7-2d14-4412-a203-72ffd00a3069" 00:08:43.078 ], 00:08:43.078 "product_name": "Raid Volume", 00:08:43.078 "block_size": 512, 00:08:43.078 "num_blocks": 63488, 00:08:43.078 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:43.078 "assigned_rate_limits": { 00:08:43.078 "rw_ios_per_sec": 0, 00:08:43.078 "rw_mbytes_per_sec": 0, 00:08:43.078 "r_mbytes_per_sec": 0, 00:08:43.078 "w_mbytes_per_sec": 0 00:08:43.078 }, 00:08:43.078 "claimed": false, 00:08:43.078 "zoned": false, 00:08:43.078 "supported_io_types": { 00:08:43.078 "read": true, 00:08:43.078 "write": true, 00:08:43.078 "unmap": false, 00:08:43.078 "flush": false, 00:08:43.078 "reset": true, 00:08:43.078 "nvme_admin": false, 00:08:43.078 "nvme_io": false, 00:08:43.078 "nvme_io_md": false, 00:08:43.078 "write_zeroes": true, 00:08:43.078 "zcopy": false, 00:08:43.078 "get_zone_info": false, 00:08:43.078 "zone_management": false, 00:08:43.078 "zone_append": false, 00:08:43.078 "compare": false, 00:08:43.078 
"compare_and_write": false, 00:08:43.078 "abort": false, 00:08:43.078 "seek_hole": false, 00:08:43.078 "seek_data": false, 00:08:43.078 "copy": false, 00:08:43.078 "nvme_iov_md": false 00:08:43.078 }, 00:08:43.078 "memory_domains": [ 00:08:43.078 { 00:08:43.078 "dma_device_id": "system", 00:08:43.078 "dma_device_type": 1 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.078 "dma_device_type": 2 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "dma_device_id": "system", 00:08:43.078 "dma_device_type": 1 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.078 "dma_device_type": 2 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "dma_device_id": "system", 00:08:43.078 "dma_device_type": 1 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.078 "dma_device_type": 2 00:08:43.078 } 00:08:43.078 ], 00:08:43.078 "driver_specific": { 00:08:43.078 "raid": { 00:08:43.078 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:43.078 "strip_size_kb": 0, 00:08:43.078 "state": "online", 00:08:43.078 "raid_level": "raid1", 00:08:43.078 "superblock": true, 00:08:43.078 "num_base_bdevs": 3, 00:08:43.078 "num_base_bdevs_discovered": 3, 00:08:43.078 "num_base_bdevs_operational": 3, 00:08:43.078 "base_bdevs_list": [ 00:08:43.078 { 00:08:43.078 "name": "pt1", 00:08:43.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.078 "is_configured": true, 00:08:43.078 "data_offset": 2048, 00:08:43.078 "data_size": 63488 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "name": "pt2", 00:08:43.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.078 "is_configured": true, 00:08:43.078 "data_offset": 2048, 00:08:43.078 "data_size": 63488 00:08:43.078 }, 00:08:43.078 { 00:08:43.078 "name": "pt3", 00:08:43.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.078 "is_configured": true, 00:08:43.078 "data_offset": 2048, 00:08:43.078 "data_size": 63488 00:08:43.078 } 
00:08:43.078 ] 00:08:43.078 } 00:08:43.078 } 00:08:43.078 }' 00:08:43.078 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.338 pt2 00:08:43.338 pt3' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 [2024-12-15 18:39:43.704441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c8c5d1e7-2d14-4412-a203-72ffd00a3069 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c8c5d1e7-2d14-4412-a203-72ffd00a3069 ']' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 [2024-12-15 18:39:43.732098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.338 [2024-12-15 18:39:43.732160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.338 [2024-12-15 18:39:43.732266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.338 [2024-12-15 18:39:43.732404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.338 [2024-12-15 18:39:43.732470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.338 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.598 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.599 [2024-12-15 18:39:43.891851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.599 [2024-12-15 18:39:43.893871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.599 [2024-12-15 18:39:43.893964] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:43.599 [2024-12-15 18:39:43.894057] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.599 [2024-12-15 18:39:43.894150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.599 [2024-12-15 18:39:43.894210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:43.599 [2024-12-15 18:39:43.894270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.599 [2024-12-15 18:39:43.894301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:43.599 request: 00:08:43.599 { 00:08:43.599 "name": "raid_bdev1", 00:08:43.599 "raid_level": "raid1", 00:08:43.599 "base_bdevs": [ 00:08:43.599 "malloc1", 00:08:43.599 "malloc2", 00:08:43.599 "malloc3" 00:08:43.599 ], 00:08:43.599 "superblock": false, 00:08:43.599 "method": "bdev_raid_create", 00:08:43.599 "req_id": 1 00:08:43.599 } 00:08:43.599 Got JSON-RPC error response 00:08:43.599 response: 00:08:43.599 { 00:08:43.599 "code": -17, 00:08:43.599 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.599 } 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.599 [2024-12-15 18:39:43.959685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.599 [2024-12-15 18:39:43.959772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.599 [2024-12-15 18:39:43.959813] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:43.599 [2024-12-15 18:39:43.959844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.599 [2024-12-15 18:39:43.962060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.599 [2024-12-15 18:39:43.962134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.599 [2024-12-15 18:39:43.962225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.599 [2024-12-15 18:39:43.962278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:43.599 pt1 00:08:43.599 
18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.599 18:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.599 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.599 "name": "raid_bdev1", 00:08:43.599 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:43.599 "strip_size_kb": 0, 00:08:43.599 
"state": "configuring", 00:08:43.599 "raid_level": "raid1", 00:08:43.599 "superblock": true, 00:08:43.599 "num_base_bdevs": 3, 00:08:43.599 "num_base_bdevs_discovered": 1, 00:08:43.599 "num_base_bdevs_operational": 3, 00:08:43.599 "base_bdevs_list": [ 00:08:43.599 { 00:08:43.599 "name": "pt1", 00:08:43.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.599 "is_configured": true, 00:08:43.599 "data_offset": 2048, 00:08:43.599 "data_size": 63488 00:08:43.599 }, 00:08:43.599 { 00:08:43.599 "name": null, 00:08:43.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.599 "is_configured": false, 00:08:43.599 "data_offset": 2048, 00:08:43.599 "data_size": 63488 00:08:43.599 }, 00:08:43.599 { 00:08:43.599 "name": null, 00:08:43.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.599 "is_configured": false, 00:08:43.599 "data_offset": 2048, 00:08:43.599 "data_size": 63488 00:08:43.599 } 00:08:43.599 ] 00:08:43.599 }' 00:08:43.599 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.599 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.169 [2024-12-15 18:39:44.394983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.169 [2024-12-15 18:39:44.395091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.169 [2024-12-15 18:39:44.395128] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:44.169 
[2024-12-15 18:39:44.395161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.169 [2024-12-15 18:39:44.395600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.169 [2024-12-15 18:39:44.395670] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.169 [2024-12-15 18:39:44.395791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.169 [2024-12-15 18:39:44.395864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.169 pt2 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.169 [2024-12-15 18:39:44.406959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.169 "name": "raid_bdev1", 00:08:44.169 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:44.169 "strip_size_kb": 0, 00:08:44.169 "state": "configuring", 00:08:44.169 "raid_level": "raid1", 00:08:44.169 "superblock": true, 00:08:44.169 "num_base_bdevs": 3, 00:08:44.169 "num_base_bdevs_discovered": 1, 00:08:44.169 "num_base_bdevs_operational": 3, 00:08:44.169 "base_bdevs_list": [ 00:08:44.169 { 00:08:44.169 "name": "pt1", 00:08:44.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.169 "is_configured": true, 00:08:44.169 "data_offset": 2048, 00:08:44.169 "data_size": 63488 00:08:44.169 }, 00:08:44.169 { 00:08:44.169 "name": null, 00:08:44.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.169 "is_configured": false, 00:08:44.169 "data_offset": 0, 00:08:44.169 "data_size": 63488 00:08:44.169 }, 00:08:44.169 { 00:08:44.169 "name": null, 00:08:44.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.169 "is_configured": false, 00:08:44.169 
"data_offset": 2048, 00:08:44.169 "data_size": 63488 00:08:44.169 } 00:08:44.169 ] 00:08:44.169 }' 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.169 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.429 [2024-12-15 18:39:44.790325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.429 [2024-12-15 18:39:44.790451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.429 [2024-12-15 18:39:44.790494] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:44.429 [2024-12-15 18:39:44.790523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.429 [2024-12-15 18:39:44.790978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.429 [2024-12-15 18:39:44.791033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.429 [2024-12-15 18:39:44.791141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.429 [2024-12-15 18:39:44.791193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.429 pt2 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.429 18:39:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.429 [2024-12-15 18:39:44.802258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.429 [2024-12-15 18:39:44.802335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.429 [2024-12-15 18:39:44.802386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:44.429 [2024-12-15 18:39:44.802412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.429 [2024-12-15 18:39:44.802763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.429 [2024-12-15 18:39:44.802829] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.429 [2024-12-15 18:39:44.802917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:44.429 [2024-12-15 18:39:44.802974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.429 [2024-12-15 18:39:44.803107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:44.429 [2024-12-15 18:39:44.803144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.429 [2024-12-15 18:39:44.803381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:44.429 [2024-12-15 18:39:44.803524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:08:44.429 [2024-12-15 18:39:44.803562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:44.429 [2024-12-15 18:39:44.803693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.429 pt3 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.429 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.429 "name": "raid_bdev1", 00:08:44.429 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:44.430 "strip_size_kb": 0, 00:08:44.430 "state": "online", 00:08:44.430 "raid_level": "raid1", 00:08:44.430 "superblock": true, 00:08:44.430 "num_base_bdevs": 3, 00:08:44.430 "num_base_bdevs_discovered": 3, 00:08:44.430 "num_base_bdevs_operational": 3, 00:08:44.430 "base_bdevs_list": [ 00:08:44.430 { 00:08:44.430 "name": "pt1", 00:08:44.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.430 "is_configured": true, 00:08:44.430 "data_offset": 2048, 00:08:44.430 "data_size": 63488 00:08:44.430 }, 00:08:44.430 { 00:08:44.430 "name": "pt2", 00:08:44.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.430 "is_configured": true, 00:08:44.430 "data_offset": 2048, 00:08:44.430 "data_size": 63488 00:08:44.430 }, 00:08:44.430 { 00:08:44.430 "name": "pt3", 00:08:44.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.430 "is_configured": true, 00:08:44.430 "data_offset": 2048, 00:08:44.430 "data_size": 63488 00:08:44.430 } 00:08:44.430 ] 00:08:44.430 }' 00:08:44.430 18:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.430 18:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.999 [2024-12-15 18:39:45.169942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.999 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.999 "name": "raid_bdev1", 00:08:44.999 "aliases": [ 00:08:44.999 "c8c5d1e7-2d14-4412-a203-72ffd00a3069" 00:08:44.999 ], 00:08:44.999 "product_name": "Raid Volume", 00:08:44.999 "block_size": 512, 00:08:44.999 "num_blocks": 63488, 00:08:44.999 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:44.999 "assigned_rate_limits": { 00:08:44.999 "rw_ios_per_sec": 0, 00:08:44.999 "rw_mbytes_per_sec": 0, 00:08:44.999 "r_mbytes_per_sec": 0, 00:08:44.999 "w_mbytes_per_sec": 0 00:08:44.999 }, 00:08:44.999 "claimed": false, 00:08:44.999 "zoned": false, 00:08:44.999 "supported_io_types": { 00:08:44.999 "read": true, 00:08:44.999 "write": true, 00:08:44.999 "unmap": false, 00:08:44.999 "flush": false, 00:08:44.999 "reset": true, 00:08:44.999 "nvme_admin": false, 00:08:44.999 "nvme_io": false, 00:08:44.999 "nvme_io_md": false, 00:08:44.999 "write_zeroes": true, 00:08:44.999 "zcopy": false, 00:08:44.999 "get_zone_info": false, 
00:08:44.999 "zone_management": false, 00:08:44.999 "zone_append": false, 00:08:44.999 "compare": false, 00:08:44.999 "compare_and_write": false, 00:08:44.999 "abort": false, 00:08:44.999 "seek_hole": false, 00:08:44.999 "seek_data": false, 00:08:44.999 "copy": false, 00:08:44.999 "nvme_iov_md": false 00:08:44.999 }, 00:08:44.999 "memory_domains": [ 00:08:44.999 { 00:08:44.999 "dma_device_id": "system", 00:08:44.999 "dma_device_type": 1 00:08:44.999 }, 00:08:44.999 { 00:08:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.999 "dma_device_type": 2 00:08:44.999 }, 00:08:44.999 { 00:08:44.999 "dma_device_id": "system", 00:08:44.999 "dma_device_type": 1 00:08:44.999 }, 00:08:44.999 { 00:08:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.999 "dma_device_type": 2 00:08:44.999 }, 00:08:44.999 { 00:08:44.999 "dma_device_id": "system", 00:08:44.999 "dma_device_type": 1 00:08:44.999 }, 00:08:44.999 { 00:08:44.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.999 "dma_device_type": 2 00:08:44.999 } 00:08:44.999 ], 00:08:44.999 "driver_specific": { 00:08:44.999 "raid": { 00:08:44.999 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:44.999 "strip_size_kb": 0, 00:08:44.999 "state": "online", 00:08:44.999 "raid_level": "raid1", 00:08:44.999 "superblock": true, 00:08:44.999 "num_base_bdevs": 3, 00:08:44.999 "num_base_bdevs_discovered": 3, 00:08:44.999 "num_base_bdevs_operational": 3, 00:08:44.999 "base_bdevs_list": [ 00:08:44.999 { 00:08:45.000 "name": "pt1", 00:08:45.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.000 "is_configured": true, 00:08:45.000 "data_offset": 2048, 00:08:45.000 "data_size": 63488 00:08:45.000 }, 00:08:45.000 { 00:08:45.000 "name": "pt2", 00:08:45.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.000 "is_configured": true, 00:08:45.000 "data_offset": 2048, 00:08:45.000 "data_size": 63488 00:08:45.000 }, 00:08:45.000 { 00:08:45.000 "name": "pt3", 00:08:45.000 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:45.000 "is_configured": true, 00:08:45.000 "data_offset": 2048, 00:08:45.000 "data_size": 63488 00:08:45.000 } 00:08:45.000 ] 00:08:45.000 } 00:08:45.000 } 00:08:45.000 }' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:45.000 pt2 00:08:45.000 pt3' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.000 [2024-12-15 18:39:45.409464] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c8c5d1e7-2d14-4412-a203-72ffd00a3069 '!=' c8c5d1e7-2d14-4412-a203-72ffd00a3069 ']' 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.000 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.000 [2024-12-15 18:39:45.437213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.260 18:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.260 "name": "raid_bdev1", 00:08:45.260 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:45.260 "strip_size_kb": 0, 00:08:45.260 "state": "online", 00:08:45.260 "raid_level": "raid1", 00:08:45.260 "superblock": true, 00:08:45.260 "num_base_bdevs": 3, 00:08:45.260 "num_base_bdevs_discovered": 2, 00:08:45.260 "num_base_bdevs_operational": 2, 00:08:45.260 "base_bdevs_list": [ 00:08:45.260 { 00:08:45.260 "name": null, 00:08:45.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.260 "is_configured": false, 00:08:45.260 "data_offset": 0, 00:08:45.260 "data_size": 63488 00:08:45.260 }, 00:08:45.260 { 00:08:45.260 "name": "pt2", 00:08:45.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.260 "is_configured": true, 00:08:45.260 "data_offset": 2048, 00:08:45.260 "data_size": 63488 00:08:45.260 }, 00:08:45.260 { 00:08:45.260 "name": "pt3", 00:08:45.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.260 "is_configured": true, 00:08:45.260 "data_offset": 2048, 00:08:45.260 "data_size": 63488 00:08:45.260 } 
00:08:45.260 ] 00:08:45.260 }' 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.260 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.520 [2024-12-15 18:39:45.832550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.520 [2024-12-15 18:39:45.832634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.520 [2024-12-15 18:39:45.832735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.520 [2024-12-15 18:39:45.832830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.520 [2024-12-15 18:39:45.832909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:45.520 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.521 18:39:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.521 [2024-12-15 18:39:45.916456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.521 [2024-12-15 18:39:45.916572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.521 [2024-12-15 18:39:45.916614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:08:45.521 [2024-12-15 18:39:45.916646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.521 [2024-12-15 18:39:45.918922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.521 [2024-12-15 18:39:45.918997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.521 [2024-12-15 18:39:45.919116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:45.521 [2024-12-15 18:39:45.919180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.521 pt2 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.521 18:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.521 "name": "raid_bdev1", 00:08:45.521 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:45.521 "strip_size_kb": 0, 00:08:45.521 "state": "configuring", 00:08:45.521 "raid_level": "raid1", 00:08:45.521 "superblock": true, 00:08:45.521 "num_base_bdevs": 3, 00:08:45.521 "num_base_bdevs_discovered": 1, 00:08:45.521 "num_base_bdevs_operational": 2, 00:08:45.521 "base_bdevs_list": [ 00:08:45.521 { 00:08:45.521 "name": null, 00:08:45.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.521 "is_configured": false, 00:08:45.521 "data_offset": 2048, 00:08:45.521 "data_size": 63488 00:08:45.521 }, 00:08:45.521 { 00:08:45.521 "name": "pt2", 00:08:45.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.521 "is_configured": true, 00:08:45.521 "data_offset": 2048, 00:08:45.521 "data_size": 63488 00:08:45.521 }, 00:08:45.521 { 00:08:45.521 "name": null, 00:08:45.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.521 "is_configured": false, 00:08:45.521 "data_offset": 2048, 00:08:45.521 "data_size": 63488 00:08:45.521 } 
00:08:45.521 ] 00:08:45.521 }' 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.521 18:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.090 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.090 [2024-12-15 18:39:46.295875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.090 [2024-12-15 18:39:46.295989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.090 [2024-12-15 18:39:46.296031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:08:46.091 [2024-12-15 18:39:46.296059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.091 [2024-12-15 18:39:46.296515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.091 [2024-12-15 18:39:46.296591] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.091 [2024-12-15 18:39:46.296719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:46.091 [2024-12-15 18:39:46.296773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.091 [2024-12-15 18:39:46.296909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:08:46.091 [2024-12-15 18:39:46.296947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.091 [2024-12-15 18:39:46.297229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:46.091 [2024-12-15 18:39:46.297397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:46.091 [2024-12-15 18:39:46.297444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:46.091 [2024-12-15 18:39:46.297605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.091 pt3 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.091 
18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.091 "name": "raid_bdev1", 00:08:46.091 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:46.091 "strip_size_kb": 0, 00:08:46.091 "state": "online", 00:08:46.091 "raid_level": "raid1", 00:08:46.091 "superblock": true, 00:08:46.091 "num_base_bdevs": 3, 00:08:46.091 "num_base_bdevs_discovered": 2, 00:08:46.091 "num_base_bdevs_operational": 2, 00:08:46.091 "base_bdevs_list": [ 00:08:46.091 { 00:08:46.091 "name": null, 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.091 "is_configured": false, 00:08:46.091 "data_offset": 2048, 00:08:46.091 "data_size": 63488 00:08:46.091 }, 00:08:46.091 { 00:08:46.091 "name": "pt2", 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.091 "is_configured": true, 00:08:46.091 "data_offset": 2048, 00:08:46.091 "data_size": 63488 00:08:46.091 }, 00:08:46.091 { 00:08:46.091 "name": "pt3", 00:08:46.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.091 "is_configured": true, 00:08:46.091 "data_offset": 2048, 00:08:46.091 "data_size": 63488 00:08:46.091 } 00:08:46.091 ] 00:08:46.091 }' 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.091 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.351 [2024-12-15 18:39:46.739064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.351 [2024-12-15 18:39:46.739137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.351 [2024-12-15 18:39:46.739252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.351 [2024-12-15 18:39:46.739342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.351 [2024-12-15 18:39:46.739416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.351 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.610 [2024-12-15 18:39:46.810925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.610 [2024-12-15 18:39:46.811025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.610 [2024-12-15 18:39:46.811057] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:08:46.610 [2024-12-15 18:39:46.811087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.610 [2024-12-15 18:39:46.813319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.610 [2024-12-15 18:39:46.813399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.610 [2024-12-15 18:39:46.813508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:46.610 [2024-12-15 18:39:46.813581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:46.610 [2024-12-15 18:39:46.813717] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:46.610 [2024-12-15 18:39:46.813786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.610 [2024-12-15 18:39:46.813830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:08:46.610 [2024-12-15 18:39:46.813935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.610 pt1 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.610 "name": "raid_bdev1", 00:08:46.610 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:46.610 "strip_size_kb": 0, 00:08:46.610 "state": "configuring", 00:08:46.610 "raid_level": "raid1", 00:08:46.610 "superblock": true, 00:08:46.610 "num_base_bdevs": 3, 00:08:46.610 "num_base_bdevs_discovered": 1, 00:08:46.610 "num_base_bdevs_operational": 2, 00:08:46.610 "base_bdevs_list": [ 00:08:46.610 { 00:08:46.610 "name": null, 00:08:46.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.610 "is_configured": false, 00:08:46.610 "data_offset": 2048, 00:08:46.610 "data_size": 63488 00:08:46.610 }, 00:08:46.610 { 00:08:46.610 "name": "pt2", 00:08:46.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:46.610 "is_configured": true, 00:08:46.610 "data_offset": 2048, 00:08:46.610 "data_size": 63488 00:08:46.610 }, 00:08:46.610 { 00:08:46.610 "name": null, 00:08:46.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:46.610 "is_configured": false, 00:08:46.610 "data_offset": 2048, 00:08:46.610 "data_size": 63488 00:08:46.610 } 00:08:46.610 ] 00:08:46.610 }' 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.610 18:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.870 [2024-12-15 18:39:47.298136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.870 [2024-12-15 18:39:47.298253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.870 [2024-12-15 18:39:47.298291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:08:46.870 [2024-12-15 18:39:47.298321] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.870 [2024-12-15 18:39:47.298752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.870 [2024-12-15 18:39:47.298826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.870 [2024-12-15 18:39:47.298934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:46.870 [2024-12-15 18:39:47.298988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.870 [2024-12-15 18:39:47.299107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:46.870 [2024-12-15 18:39:47.299145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.870 [2024-12-15 18:39:47.299387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:46.870 [2024-12-15 18:39:47.299547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:46.870 [2024-12-15 18:39:47.299586] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:46.870 [2024-12-15 18:39:47.299726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.870 pt3 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.870 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.130 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:47.130 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.130 "name": "raid_bdev1", 00:08:47.130 "uuid": "c8c5d1e7-2d14-4412-a203-72ffd00a3069", 00:08:47.130 "strip_size_kb": 0, 00:08:47.130 "state": "online", 00:08:47.130 "raid_level": "raid1", 00:08:47.130 "superblock": true, 00:08:47.130 "num_base_bdevs": 3, 00:08:47.130 "num_base_bdevs_discovered": 2, 00:08:47.130 "num_base_bdevs_operational": 2, 00:08:47.130 "base_bdevs_list": [ 00:08:47.130 { 00:08:47.130 "name": null, 00:08:47.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.130 "is_configured": false, 00:08:47.130 "data_offset": 2048, 00:08:47.130 "data_size": 63488 00:08:47.130 }, 00:08:47.130 { 00:08:47.130 "name": "pt2", 00:08:47.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.130 "is_configured": true, 00:08:47.130 "data_offset": 2048, 00:08:47.130 "data_size": 63488 00:08:47.130 }, 00:08:47.130 { 00:08:47.130 "name": "pt3", 00:08:47.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.130 "is_configured": true, 00:08:47.130 "data_offset": 2048, 00:08:47.130 "data_size": 63488 00:08:47.130 } 00:08:47.130 ] 00:08:47.130 }' 00:08:47.130 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.130 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:47.390 [2024-12-15 18:39:47.713682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c8c5d1e7-2d14-4412-a203-72ffd00a3069 '!=' c8c5d1e7-2d14-4412-a203-72ffd00a3069 ']' 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81591 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81591 ']' 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81591 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81591 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81591' 00:08:47.390 killing process with pid 81591 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 81591 00:08:47.390 [2024-12-15 18:39:47.783375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.390 [2024-12-15 18:39:47.783455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.390 [2024-12-15 18:39:47.783525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.390 [2024-12-15 18:39:47.783535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:47.390 18:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81591 00:08:47.390 [2024-12-15 18:39:47.818124] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.650 18:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.650 00:08:47.650 real 0m5.967s 00:08:47.650 user 0m9.912s 00:08:47.650 sys 0m1.287s 00:08:47.650 18:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.650 ************************************ 00:08:47.650 END TEST raid_superblock_test 00:08:47.650 ************************************ 00:08:47.650 18:39:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 18:39:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:47.650 18:39:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.650 18:39:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.650 18:39:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 ************************************ 00:08:47.650 START TEST raid_read_error_test 00:08:47.650 ************************************ 00:08:47.650 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:08:47.909 18:39:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.909 18:39:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xx30t2eohI 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82014 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82014 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 82014 ']' 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.909 18:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 [2024-12-15 18:39:48.186068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:47.909 [2024-12-15 18:39:48.186323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82014 ] 00:08:48.168 [2024-12-15 18:39:48.368833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.168 [2024-12-15 18:39:48.396223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.168 [2024-12-15 18:39:48.438939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.168 [2024-12-15 18:39:48.438975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 BaseBdev1_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 true 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 [2024-12-15 18:39:49.091296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.738 [2024-12-15 18:39:49.091415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.738 [2024-12-15 18:39:49.091466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.738 [2024-12-15 18:39:49.091477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.738 [2024-12-15 18:39:49.093949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.738 [2024-12-15 18:39:49.093991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.738 BaseBdev1 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 BaseBdev2_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 true 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 [2024-12-15 18:39:49.131952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.738 [2024-12-15 18:39:49.132059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.738 [2024-12-15 18:39:49.132085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.738 [2024-12-15 18:39:49.132095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.738 [2024-12-15 18:39:49.134181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.738 [2024-12-15 18:39:49.134221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.738 BaseBdev2 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 BaseBdev3_malloc 00:08:48.738 18:39:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.738 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.738 true 00:08:48.739 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.739 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.739 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.739 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.739 [2024-12-15 18:39:49.172699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.739 [2024-12-15 18:39:49.172827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.739 [2024-12-15 18:39:49.172861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.739 [2024-12-15 18:39:49.172872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.739 [2024-12-15 18:39:49.175070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.739 [2024-12-15 18:39:49.175167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:48.999 BaseBdev3 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.999 [2024-12-15 18:39:49.184742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.999 [2024-12-15 18:39:49.186623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.999 [2024-12-15 18:39:49.186755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.999 [2024-12-15 18:39:49.186990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:48.999 [2024-12-15 18:39:49.187042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.999 [2024-12-15 18:39:49.187303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:48.999 [2024-12-15 18:39:49.187499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:48.999 [2024-12-15 18:39:49.187541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:48.999 [2024-12-15 18:39:49.187722] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.999 18:39:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.999 "name": "raid_bdev1", 00:08:48.999 "uuid": "13bb2a9a-0281-4f40-b749-0b8dc42f2e1a", 00:08:48.999 "strip_size_kb": 0, 00:08:48.999 "state": "online", 00:08:48.999 "raid_level": "raid1", 00:08:48.999 "superblock": true, 00:08:48.999 "num_base_bdevs": 3, 00:08:48.999 "num_base_bdevs_discovered": 3, 00:08:48.999 "num_base_bdevs_operational": 3, 00:08:48.999 "base_bdevs_list": [ 00:08:48.999 { 00:08:48.999 "name": "BaseBdev1", 00:08:48.999 "uuid": "84ed4fdd-2527-5d0b-807b-af6e28f6ee87", 00:08:48.999 "is_configured": true, 00:08:48.999 "data_offset": 2048, 00:08:48.999 "data_size": 63488 00:08:48.999 }, 00:08:48.999 { 00:08:48.999 "name": "BaseBdev2", 00:08:48.999 "uuid": "1beb89e8-0b39-5c78-a82e-6b7153bbc4d1", 00:08:48.999 "is_configured": true, 00:08:48.999 "data_offset": 2048, 00:08:48.999 "data_size": 63488 
00:08:48.999 }, 00:08:48.999 { 00:08:48.999 "name": "BaseBdev3", 00:08:48.999 "uuid": "2ec3bf49-34de-5e75-a62e-704963ecb0bd", 00:08:48.999 "is_configured": true, 00:08:48.999 "data_offset": 2048, 00:08:48.999 "data_size": 63488 00:08:48.999 } 00:08:48.999 ] 00:08:48.999 }' 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.999 18:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.259 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:49.259 18:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:49.519 [2024-12-15 18:39:49.736659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.477 
18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.477 "name": "raid_bdev1", 00:08:50.477 "uuid": "13bb2a9a-0281-4f40-b749-0b8dc42f2e1a", 00:08:50.477 "strip_size_kb": 0, 00:08:50.477 "state": "online", 00:08:50.477 "raid_level": "raid1", 00:08:50.477 "superblock": true, 00:08:50.477 "num_base_bdevs": 3, 00:08:50.477 "num_base_bdevs_discovered": 3, 00:08:50.477 "num_base_bdevs_operational": 3, 00:08:50.477 "base_bdevs_list": [ 00:08:50.477 { 00:08:50.477 "name": "BaseBdev1", 00:08:50.477 "uuid": "84ed4fdd-2527-5d0b-807b-af6e28f6ee87", 
00:08:50.477 "is_configured": true, 00:08:50.477 "data_offset": 2048, 00:08:50.477 "data_size": 63488 00:08:50.477 }, 00:08:50.477 { 00:08:50.477 "name": "BaseBdev2", 00:08:50.477 "uuid": "1beb89e8-0b39-5c78-a82e-6b7153bbc4d1", 00:08:50.477 "is_configured": true, 00:08:50.477 "data_offset": 2048, 00:08:50.477 "data_size": 63488 00:08:50.477 }, 00:08:50.477 { 00:08:50.477 "name": "BaseBdev3", 00:08:50.477 "uuid": "2ec3bf49-34de-5e75-a62e-704963ecb0bd", 00:08:50.477 "is_configured": true, 00:08:50.477 "data_offset": 2048, 00:08:50.477 "data_size": 63488 00:08:50.477 } 00:08:50.477 ] 00:08:50.477 }' 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.477 18:39:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.738 [2024-12-15 18:39:51.026689] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.738 [2024-12-15 18:39:51.026799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.738 [2024-12-15 18:39:51.029600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.738 [2024-12-15 18:39:51.029708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.738 [2024-12-15 18:39:51.029849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.738 [2024-12-15 18:39:51.029906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:50.738 { 00:08:50.738 "results": [ 00:08:50.738 { 00:08:50.738 "job": "raid_bdev1", 
00:08:50.738 "core_mask": "0x1", 00:08:50.738 "workload": "randrw", 00:08:50.738 "percentage": 50, 00:08:50.738 "status": "finished", 00:08:50.738 "queue_depth": 1, 00:08:50.738 "io_size": 131072, 00:08:50.738 "runtime": 1.290865, 00:08:50.738 "iops": 13753.568343707513, 00:08:50.738 "mibps": 1719.1960429634391, 00:08:50.738 "io_failed": 0, 00:08:50.738 "io_timeout": 0, 00:08:50.738 "avg_latency_us": 70.01138135793742, 00:08:50.738 "min_latency_us": 22.69344978165939, 00:08:50.738 "max_latency_us": 1380.8349344978167 00:08:50.738 } 00:08:50.738 ], 00:08:50.738 "core_count": 1 00:08:50.738 } 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82014 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 82014 ']' 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 82014 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82014 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.738 killing process with pid 82014 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82014' 00:08:50.738 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 82014 00:08:50.738 [2024-12-15 18:39:51.070149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.738 18:39:51 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 82014 00:08:50.738 [2024-12-15 18:39:51.096750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xx30t2eohI 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:50.998 00:08:50.998 real 0m3.232s 00:08:50.998 user 0m4.082s 00:08:50.998 sys 0m0.538s 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.998 18:39:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.998 ************************************ 00:08:50.998 END TEST raid_read_error_test 00:08:50.998 ************************************ 00:08:50.998 18:39:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:50.998 18:39:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:50.998 18:39:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.998 18:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.998 ************************************ 00:08:50.998 START TEST raid_write_error_test 00:08:50.998 ************************************ 00:08:50.998 18:39:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aR0gjO4cRp 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82149 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82149 00:08:50.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 82149 ']' 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.998 18:39:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.259 [2024-12-15 18:39:51.492397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:51.259 [2024-12-15 18:39:51.492621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82149 ] 00:08:51.259 [2024-12-15 18:39:51.664689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.259 [2024-12-15 18:39:51.692214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.519 [2024-12-15 18:39:51.736237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.519 [2024-12-15 18:39:51.736274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 BaseBdev1_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 true 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 [2024-12-15 18:39:52.376437] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.089 [2024-12-15 18:39:52.376535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.089 [2024-12-15 18:39:52.376576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:52.089 [2024-12-15 18:39:52.376607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.089 [2024-12-15 18:39:52.378756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.089 [2024-12-15 18:39:52.378844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.089 BaseBdev1 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.089 BaseBdev2_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 true 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 [2024-12-15 18:39:52.417102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.089 [2024-12-15 18:39:52.417192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.089 [2024-12-15 18:39:52.417230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:52.089 [2024-12-15 18:39:52.417258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.089 [2024-12-15 18:39:52.419314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.089 [2024-12-15 18:39:52.419385] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.089 BaseBdev2 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.089 18:39:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 BaseBdev3_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 true 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 [2024-12-15 18:39:52.457661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:52.089 [2024-12-15 18:39:52.457768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.089 [2024-12-15 18:39:52.457816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:52.089 [2024-12-15 18:39:52.457851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.089 [2024-12-15 18:39:52.459906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.089 [2024-12-15 18:39:52.459984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:52.089 BaseBdev3 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 [2024-12-15 18:39:52.469696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.089 [2024-12-15 18:39:52.471516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.089 [2024-12-15 18:39:52.471630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.089 [2024-12-15 18:39:52.471822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:52.089 [2024-12-15 18:39:52.471841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.089 [2024-12-15 18:39:52.472068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:52.089 [2024-12-15 18:39:52.472225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:52.089 [2024-12-15 18:39:52.472235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:52.089 [2024-12-15 18:39:52.472380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.089 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.349 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.349 "name": "raid_bdev1", 00:08:52.349 "uuid": "5ff9c037-c46f-458e-b8b8-746c5774181c", 00:08:52.349 "strip_size_kb": 0, 00:08:52.349 "state": "online", 00:08:52.349 "raid_level": "raid1", 00:08:52.349 "superblock": true, 00:08:52.349 "num_base_bdevs": 3, 00:08:52.349 "num_base_bdevs_discovered": 3, 00:08:52.349 "num_base_bdevs_operational": 3, 00:08:52.349 "base_bdevs_list": [ 00:08:52.349 { 00:08:52.349 "name": "BaseBdev1", 00:08:52.349 
"uuid": "821fd0b8-1c08-5655-8ccd-b5568d947c6a", 00:08:52.349 "is_configured": true, 00:08:52.349 "data_offset": 2048, 00:08:52.349 "data_size": 63488 00:08:52.349 }, 00:08:52.349 { 00:08:52.349 "name": "BaseBdev2", 00:08:52.349 "uuid": "618fb408-a528-59e5-aa18-81ebb0594a1f", 00:08:52.349 "is_configured": true, 00:08:52.349 "data_offset": 2048, 00:08:52.349 "data_size": 63488 00:08:52.349 }, 00:08:52.349 { 00:08:52.349 "name": "BaseBdev3", 00:08:52.349 "uuid": "0f124029-fddf-532f-a945-a65fee213a1c", 00:08:52.349 "is_configured": true, 00:08:52.349 "data_offset": 2048, 00:08:52.349 "data_size": 63488 00:08:52.349 } 00:08:52.349 ] 00:08:52.349 }' 00:08:52.349 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.349 18:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.609 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:52.609 18:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:52.609 [2024-12-15 18:39:52.957281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.547 [2024-12-15 18:39:53.876696] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:53.547 [2024-12-15 18:39:53.876840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.547 [2024-12-15 18:39:53.877112] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006560 
00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.547 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.548 "name": "raid_bdev1", 00:08:53.548 "uuid": "5ff9c037-c46f-458e-b8b8-746c5774181c", 00:08:53.548 "strip_size_kb": 0, 00:08:53.548 "state": "online", 00:08:53.548 "raid_level": "raid1", 00:08:53.548 "superblock": true, 00:08:53.548 "num_base_bdevs": 3, 00:08:53.548 "num_base_bdevs_discovered": 2, 00:08:53.548 "num_base_bdevs_operational": 2, 00:08:53.548 "base_bdevs_list": [ 00:08:53.548 { 00:08:53.548 "name": null, 00:08:53.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.548 "is_configured": false, 00:08:53.548 "data_offset": 0, 00:08:53.548 "data_size": 63488 00:08:53.548 }, 00:08:53.548 { 00:08:53.548 "name": "BaseBdev2", 00:08:53.548 "uuid": "618fb408-a528-59e5-aa18-81ebb0594a1f", 00:08:53.548 "is_configured": true, 00:08:53.548 "data_offset": 2048, 00:08:53.548 "data_size": 63488 00:08:53.548 }, 00:08:53.548 { 00:08:53.548 "name": "BaseBdev3", 00:08:53.548 "uuid": "0f124029-fddf-532f-a945-a65fee213a1c", 00:08:53.548 "is_configured": true, 00:08:53.548 "data_offset": 2048, 00:08:53.548 "data_size": 63488 00:08:53.548 } 00:08:53.548 ] 00:08:53.548 }' 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.548 18:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.118 [2024-12-15 18:39:54.286573] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.118 [2024-12-15 18:39:54.286654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.118 [2024-12-15 18:39:54.289134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.118 [2024-12-15 18:39:54.289214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.118 [2024-12-15 18:39:54.289328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.118 [2024-12-15 18:39:54.289403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:54.118 { 00:08:54.118 "results": [ 00:08:54.118 { 00:08:54.118 "job": "raid_bdev1", 00:08:54.118 "core_mask": "0x1", 00:08:54.118 "workload": "randrw", 00:08:54.118 "percentage": 50, 00:08:54.118 "status": "finished", 00:08:54.118 "queue_depth": 1, 00:08:54.118 "io_size": 131072, 00:08:54.118 "runtime": 1.330112, 00:08:54.118 "iops": 15510.723908964057, 00:08:54.118 "mibps": 1938.8404886205071, 00:08:54.118 "io_failed": 0, 00:08:54.118 "io_timeout": 0, 00:08:54.118 "avg_latency_us": 61.779989962956925, 00:08:54.118 "min_latency_us": 24.258515283842794, 00:08:54.118 "max_latency_us": 1359.3711790393013 00:08:54.118 } 00:08:54.118 ], 00:08:54.118 "core_count": 1 00:08:54.118 } 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82149 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 82149 ']' 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 82149 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:54.118 18:39:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82149 00:08:54.118 killing process with pid 82149 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82149' 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 82149 00:08:54.118 [2024-12-15 18:39:54.330913] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 82149 00:08:54.118 [2024-12-15 18:39:54.357317] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aR0gjO4cRp 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:54.118 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:54.378 ************************************ 00:08:54.378 END TEST raid_write_error_test 00:08:54.378 ************************************ 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:08:54.378 00:08:54.378 real 0m3.180s 00:08:54.378 user 0m3.977s 00:08:54.378 sys 0m0.544s 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.378 18:39:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.378 18:39:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:54.378 18:39:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:54.378 18:39:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:54.378 18:39:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:54.378 18:39:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.378 18:39:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.378 ************************************ 00:08:54.378 START TEST raid_state_function_test 00:08:54.378 ************************************ 00:08:54.378 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:08:54.378 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.379 
18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:54.379 18:39:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82276 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82276' 00:08:54.379 Process raid pid: 82276 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82276 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82276 ']' 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.379 18:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.379 [2024-12-15 18:39:54.739669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:54.379 [2024-12-15 18:39:54.739951] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.637 [2024-12-15 18:39:54.914296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.637 [2024-12-15 18:39:54.940770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.637 [2024-12-15 18:39:54.983361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.637 [2024-12-15 18:39:54.983414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.204 [2024-12-15 18:39:55.566344] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.204 [2024-12-15 18:39:55.566405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.204 [2024-12-15 18:39:55.566418] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.204 [2024-12-15 18:39:55.566427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.204 [2024-12-15 18:39:55.566433] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:55.204 [2024-12-15 18:39:55.566444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.204 [2024-12-15 18:39:55.566450] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:55.204 [2024-12-15 18:39:55.566458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.204 "name": "Existed_Raid", 00:08:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.204 "strip_size_kb": 64, 00:08:55.204 "state": "configuring", 00:08:55.204 "raid_level": "raid0", 00:08:55.204 "superblock": false, 00:08:55.204 "num_base_bdevs": 4, 00:08:55.204 "num_base_bdevs_discovered": 0, 00:08:55.204 "num_base_bdevs_operational": 4, 00:08:55.204 "base_bdevs_list": [ 00:08:55.204 { 00:08:55.204 "name": "BaseBdev1", 00:08:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.204 "is_configured": false, 00:08:55.204 "data_offset": 0, 00:08:55.204 "data_size": 0 00:08:55.204 }, 00:08:55.204 { 00:08:55.204 "name": "BaseBdev2", 00:08:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.204 "is_configured": false, 00:08:55.204 "data_offset": 0, 00:08:55.204 "data_size": 0 00:08:55.204 }, 00:08:55.204 { 00:08:55.204 "name": "BaseBdev3", 00:08:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.204 "is_configured": false, 00:08:55.204 "data_offset": 0, 00:08:55.204 "data_size": 0 00:08:55.204 }, 00:08:55.204 { 00:08:55.204 "name": "BaseBdev4", 00:08:55.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.204 "is_configured": false, 00:08:55.204 "data_offset": 0, 00:08:55.204 "data_size": 0 00:08:55.204 } 00:08:55.204 ] 00:08:55.204 }' 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.204 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 [2024-12-15 18:39:55.973597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.774 [2024-12-15 18:39:55.973695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 [2024-12-15 18:39:55.985576] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.774 [2024-12-15 18:39:55.985656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.774 [2024-12-15 18:39:55.985685] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.774 [2024-12-15 18:39:55.985709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.774 [2024-12-15 18:39:55.985727] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.774 [2024-12-15 18:39:55.985748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.774 [2024-12-15 18:39:55.985766] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:55.774 [2024-12-15 18:39:55.985787] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 [2024-12-15 18:39:56.006759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.774 BaseBdev1 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 [ 00:08:55.774 { 00:08:55.774 "name": "BaseBdev1", 00:08:55.774 "aliases": [ 00:08:55.774 "ca3ce218-b85b-48a9-a559-0a629f12c1a0" 00:08:55.774 ], 00:08:55.774 "product_name": "Malloc disk", 00:08:55.774 "block_size": 512, 00:08:55.774 "num_blocks": 65536, 00:08:55.774 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:55.774 "assigned_rate_limits": { 00:08:55.774 "rw_ios_per_sec": 0, 00:08:55.774 "rw_mbytes_per_sec": 0, 00:08:55.774 "r_mbytes_per_sec": 0, 00:08:55.774 "w_mbytes_per_sec": 0 00:08:55.774 }, 00:08:55.774 "claimed": true, 00:08:55.774 "claim_type": "exclusive_write", 00:08:55.774 "zoned": false, 00:08:55.774 "supported_io_types": { 00:08:55.774 "read": true, 00:08:55.774 "write": true, 00:08:55.774 "unmap": true, 00:08:55.774 "flush": true, 00:08:55.774 "reset": true, 00:08:55.774 "nvme_admin": false, 00:08:55.774 "nvme_io": false, 00:08:55.774 "nvme_io_md": false, 00:08:55.774 "write_zeroes": true, 00:08:55.774 "zcopy": true, 00:08:55.774 "get_zone_info": false, 00:08:55.774 "zone_management": false, 00:08:55.774 "zone_append": false, 00:08:55.774 "compare": false, 00:08:55.774 "compare_and_write": false, 00:08:55.774 "abort": true, 00:08:55.774 "seek_hole": false, 00:08:55.774 "seek_data": false, 00:08:55.774 "copy": true, 00:08:55.774 "nvme_iov_md": false 00:08:55.774 }, 00:08:55.774 "memory_domains": [ 00:08:55.774 { 00:08:55.774 "dma_device_id": "system", 00:08:55.774 "dma_device_type": 1 00:08:55.774 }, 00:08:55.774 { 00:08:55.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.774 "dma_device_type": 2 00:08:55.774 } 00:08:55.774 ], 00:08:55.774 "driver_specific": {} 00:08:55.774 } 00:08:55.774 ] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.774 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.774 "name": "Existed_Raid", 
00:08:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.774 "strip_size_kb": 64, 00:08:55.774 "state": "configuring", 00:08:55.774 "raid_level": "raid0", 00:08:55.774 "superblock": false, 00:08:55.774 "num_base_bdevs": 4, 00:08:55.774 "num_base_bdevs_discovered": 1, 00:08:55.774 "num_base_bdevs_operational": 4, 00:08:55.774 "base_bdevs_list": [ 00:08:55.774 { 00:08:55.774 "name": "BaseBdev1", 00:08:55.774 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:55.774 "is_configured": true, 00:08:55.774 "data_offset": 0, 00:08:55.774 "data_size": 65536 00:08:55.774 }, 00:08:55.774 { 00:08:55.774 "name": "BaseBdev2", 00:08:55.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.775 "is_configured": false, 00:08:55.775 "data_offset": 0, 00:08:55.775 "data_size": 0 00:08:55.775 }, 00:08:55.775 { 00:08:55.775 "name": "BaseBdev3", 00:08:55.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.775 "is_configured": false, 00:08:55.775 "data_offset": 0, 00:08:55.775 "data_size": 0 00:08:55.775 }, 00:08:55.775 { 00:08:55.775 "name": "BaseBdev4", 00:08:55.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.775 "is_configured": false, 00:08:55.775 "data_offset": 0, 00:08:55.775 "data_size": 0 00:08:55.775 } 00:08:55.775 ] 00:08:55.775 }' 00:08:55.775 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.775 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.345 [2024-12-15 18:39:56.533926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.345 [2024-12-15 18:39:56.534025] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.345 [2024-12-15 18:39:56.545918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.345 [2024-12-15 18:39:56.547812] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.345 [2024-12-15 18:39:56.547884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.345 [2024-12-15 18:39:56.547912] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.345 [2024-12-15 18:39:56.547934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.345 [2024-12-15 18:39:56.547952] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:56.345 [2024-12-15 18:39:56.547971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.345 "name": "Existed_Raid", 00:08:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.345 "strip_size_kb": 64, 00:08:56.345 "state": "configuring", 00:08:56.345 "raid_level": "raid0", 00:08:56.345 "superblock": false, 00:08:56.345 "num_base_bdevs": 4, 00:08:56.345 
"num_base_bdevs_discovered": 1, 00:08:56.345 "num_base_bdevs_operational": 4, 00:08:56.345 "base_bdevs_list": [ 00:08:56.345 { 00:08:56.345 "name": "BaseBdev1", 00:08:56.345 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:56.345 "is_configured": true, 00:08:56.345 "data_offset": 0, 00:08:56.345 "data_size": 65536 00:08:56.345 }, 00:08:56.345 { 00:08:56.345 "name": "BaseBdev2", 00:08:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.345 "is_configured": false, 00:08:56.345 "data_offset": 0, 00:08:56.345 "data_size": 0 00:08:56.345 }, 00:08:56.345 { 00:08:56.345 "name": "BaseBdev3", 00:08:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.345 "is_configured": false, 00:08:56.345 "data_offset": 0, 00:08:56.345 "data_size": 0 00:08:56.345 }, 00:08:56.345 { 00:08:56.345 "name": "BaseBdev4", 00:08:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.345 "is_configured": false, 00:08:56.345 "data_offset": 0, 00:08:56.345 "data_size": 0 00:08:56.345 } 00:08:56.345 ] 00:08:56.345 }' 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.345 18:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.606 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.606 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.606 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.606 [2024-12-15 18:39:57.044197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.606 BaseBdev2 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:56.866 18:39:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.866 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 [ 00:08:56.866 { 00:08:56.866 "name": "BaseBdev2", 00:08:56.866 "aliases": [ 00:08:56.866 "dc56f4c3-86a0-4db4-b47f-0a46623313e6" 00:08:56.866 ], 00:08:56.866 "product_name": "Malloc disk", 00:08:56.866 "block_size": 512, 00:08:56.866 "num_blocks": 65536, 00:08:56.866 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:56.866 "assigned_rate_limits": { 00:08:56.866 "rw_ios_per_sec": 0, 00:08:56.866 "rw_mbytes_per_sec": 0, 00:08:56.866 "r_mbytes_per_sec": 0, 00:08:56.866 "w_mbytes_per_sec": 0 00:08:56.866 }, 00:08:56.866 "claimed": true, 00:08:56.866 "claim_type": "exclusive_write", 00:08:56.866 "zoned": false, 00:08:56.866 "supported_io_types": { 
00:08:56.866 "read": true, 00:08:56.866 "write": true, 00:08:56.866 "unmap": true, 00:08:56.866 "flush": true, 00:08:56.867 "reset": true, 00:08:56.867 "nvme_admin": false, 00:08:56.867 "nvme_io": false, 00:08:56.867 "nvme_io_md": false, 00:08:56.867 "write_zeroes": true, 00:08:56.867 "zcopy": true, 00:08:56.867 "get_zone_info": false, 00:08:56.867 "zone_management": false, 00:08:56.867 "zone_append": false, 00:08:56.867 "compare": false, 00:08:56.867 "compare_and_write": false, 00:08:56.867 "abort": true, 00:08:56.867 "seek_hole": false, 00:08:56.867 "seek_data": false, 00:08:56.867 "copy": true, 00:08:56.867 "nvme_iov_md": false 00:08:56.867 }, 00:08:56.867 "memory_domains": [ 00:08:56.867 { 00:08:56.867 "dma_device_id": "system", 00:08:56.867 "dma_device_type": 1 00:08:56.867 }, 00:08:56.867 { 00:08:56.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.867 "dma_device_type": 2 00:08:56.867 } 00:08:56.867 ], 00:08:56.867 "driver_specific": {} 00:08:56.867 } 00:08:56.867 ] 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.867 "name": "Existed_Raid", 00:08:56.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.867 "strip_size_kb": 64, 00:08:56.867 "state": "configuring", 00:08:56.867 "raid_level": "raid0", 00:08:56.867 "superblock": false, 00:08:56.867 "num_base_bdevs": 4, 00:08:56.867 "num_base_bdevs_discovered": 2, 00:08:56.867 "num_base_bdevs_operational": 4, 00:08:56.867 "base_bdevs_list": [ 00:08:56.867 { 00:08:56.867 "name": "BaseBdev1", 00:08:56.867 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:56.867 "is_configured": true, 00:08:56.867 "data_offset": 0, 00:08:56.867 "data_size": 65536 00:08:56.867 }, 00:08:56.867 { 00:08:56.867 "name": "BaseBdev2", 00:08:56.867 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:56.867 
"is_configured": true, 00:08:56.867 "data_offset": 0, 00:08:56.867 "data_size": 65536 00:08:56.867 }, 00:08:56.867 { 00:08:56.867 "name": "BaseBdev3", 00:08:56.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.867 "is_configured": false, 00:08:56.867 "data_offset": 0, 00:08:56.867 "data_size": 0 00:08:56.867 }, 00:08:56.867 { 00:08:56.867 "name": "BaseBdev4", 00:08:56.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.867 "is_configured": false, 00:08:56.867 "data_offset": 0, 00:08:56.867 "data_size": 0 00:08:56.867 } 00:08:56.867 ] 00:08:56.867 }' 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.867 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.127 [2024-12-15 18:39:57.551944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.127 BaseBdev3 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.127 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.387 [ 00:08:57.387 { 00:08:57.387 "name": "BaseBdev3", 00:08:57.387 "aliases": [ 00:08:57.387 "2f6243f8-5746-4074-b4c0-3f500da2e757" 00:08:57.387 ], 00:08:57.387 "product_name": "Malloc disk", 00:08:57.387 "block_size": 512, 00:08:57.387 "num_blocks": 65536, 00:08:57.387 "uuid": "2f6243f8-5746-4074-b4c0-3f500da2e757", 00:08:57.387 "assigned_rate_limits": { 00:08:57.387 "rw_ios_per_sec": 0, 00:08:57.387 "rw_mbytes_per_sec": 0, 00:08:57.387 "r_mbytes_per_sec": 0, 00:08:57.387 "w_mbytes_per_sec": 0 00:08:57.387 }, 00:08:57.387 "claimed": true, 00:08:57.387 "claim_type": "exclusive_write", 00:08:57.387 "zoned": false, 00:08:57.387 "supported_io_types": { 00:08:57.387 "read": true, 00:08:57.387 "write": true, 00:08:57.387 "unmap": true, 00:08:57.387 "flush": true, 00:08:57.387 "reset": true, 00:08:57.387 "nvme_admin": false, 00:08:57.387 "nvme_io": false, 00:08:57.387 "nvme_io_md": false, 00:08:57.387 "write_zeroes": true, 00:08:57.387 "zcopy": true, 00:08:57.387 "get_zone_info": false, 00:08:57.387 "zone_management": false, 00:08:57.387 "zone_append": false, 00:08:57.387 "compare": false, 00:08:57.387 "compare_and_write": false, 
00:08:57.387 "abort": true, 00:08:57.387 "seek_hole": false, 00:08:57.387 "seek_data": false, 00:08:57.387 "copy": true, 00:08:57.387 "nvme_iov_md": false 00:08:57.387 }, 00:08:57.387 "memory_domains": [ 00:08:57.387 { 00:08:57.387 "dma_device_id": "system", 00:08:57.387 "dma_device_type": 1 00:08:57.387 }, 00:08:57.387 { 00:08:57.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.387 "dma_device_type": 2 00:08:57.387 } 00:08:57.387 ], 00:08:57.387 "driver_specific": {} 00:08:57.387 } 00:08:57.387 ] 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.387 "name": "Existed_Raid", 00:08:57.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.387 "strip_size_kb": 64, 00:08:57.387 "state": "configuring", 00:08:57.387 "raid_level": "raid0", 00:08:57.387 "superblock": false, 00:08:57.387 "num_base_bdevs": 4, 00:08:57.387 "num_base_bdevs_discovered": 3, 00:08:57.387 "num_base_bdevs_operational": 4, 00:08:57.387 "base_bdevs_list": [ 00:08:57.387 { 00:08:57.387 "name": "BaseBdev1", 00:08:57.387 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:57.387 "is_configured": true, 00:08:57.387 "data_offset": 0, 00:08:57.387 "data_size": 65536 00:08:57.387 }, 00:08:57.387 { 00:08:57.387 "name": "BaseBdev2", 00:08:57.387 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:57.387 "is_configured": true, 00:08:57.387 "data_offset": 0, 00:08:57.387 "data_size": 65536 00:08:57.387 }, 00:08:57.387 { 00:08:57.387 "name": "BaseBdev3", 00:08:57.387 "uuid": "2f6243f8-5746-4074-b4c0-3f500da2e757", 00:08:57.387 "is_configured": true, 00:08:57.387 "data_offset": 0, 00:08:57.387 "data_size": 65536 00:08:57.387 }, 00:08:57.387 { 00:08:57.387 "name": "BaseBdev4", 00:08:57.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.387 "is_configured": false, 
00:08:57.387 "data_offset": 0, 00:08:57.387 "data_size": 0 00:08:57.387 } 00:08:57.387 ] 00:08:57.387 }' 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.387 18:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.648 [2024-12-15 18:39:58.018295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:57.648 [2024-12-15 18:39:58.018342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:57.648 [2024-12-15 18:39:58.018359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:57.648 [2024-12-15 18:39:58.018641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:57.648 [2024-12-15 18:39:58.018779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:57.648 [2024-12-15 18:39:58.018792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:57.648 [2024-12-15 18:39:58.019013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.648 BaseBdev4 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.648 [ 00:08:57.648 { 00:08:57.648 "name": "BaseBdev4", 00:08:57.648 "aliases": [ 00:08:57.648 "e6bdc174-9e81-4032-8a78-6ff572f6a054" 00:08:57.648 ], 00:08:57.648 "product_name": "Malloc disk", 00:08:57.648 "block_size": 512, 00:08:57.648 "num_blocks": 65536, 00:08:57.648 "uuid": "e6bdc174-9e81-4032-8a78-6ff572f6a054", 00:08:57.648 "assigned_rate_limits": { 00:08:57.648 "rw_ios_per_sec": 0, 00:08:57.648 "rw_mbytes_per_sec": 0, 00:08:57.648 "r_mbytes_per_sec": 0, 00:08:57.648 "w_mbytes_per_sec": 0 00:08:57.648 }, 00:08:57.648 "claimed": true, 00:08:57.648 "claim_type": "exclusive_write", 00:08:57.648 "zoned": false, 00:08:57.648 "supported_io_types": { 00:08:57.648 "read": true, 00:08:57.648 "write": true, 00:08:57.648 "unmap": true, 00:08:57.648 "flush": true, 00:08:57.648 "reset": true, 00:08:57.648 
"nvme_admin": false, 00:08:57.648 "nvme_io": false, 00:08:57.648 "nvme_io_md": false, 00:08:57.648 "write_zeroes": true, 00:08:57.648 "zcopy": true, 00:08:57.648 "get_zone_info": false, 00:08:57.648 "zone_management": false, 00:08:57.648 "zone_append": false, 00:08:57.648 "compare": false, 00:08:57.648 "compare_and_write": false, 00:08:57.648 "abort": true, 00:08:57.648 "seek_hole": false, 00:08:57.648 "seek_data": false, 00:08:57.648 "copy": true, 00:08:57.648 "nvme_iov_md": false 00:08:57.648 }, 00:08:57.648 "memory_domains": [ 00:08:57.648 { 00:08:57.648 "dma_device_id": "system", 00:08:57.648 "dma_device_type": 1 00:08:57.648 }, 00:08:57.648 { 00:08:57.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.648 "dma_device_type": 2 00:08:57.648 } 00:08:57.648 ], 00:08:57.648 "driver_specific": {} 00:08:57.648 } 00:08:57.648 ] 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.648 18:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.648 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.649 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.908 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.908 "name": "Existed_Raid", 00:08:57.908 "uuid": "43e06db8-76c9-437e-b7ec-71f5688f5576", 00:08:57.908 "strip_size_kb": 64, 00:08:57.908 "state": "online", 00:08:57.908 "raid_level": "raid0", 00:08:57.908 "superblock": false, 00:08:57.908 "num_base_bdevs": 4, 00:08:57.908 "num_base_bdevs_discovered": 4, 00:08:57.908 "num_base_bdevs_operational": 4, 00:08:57.908 "base_bdevs_list": [ 00:08:57.908 { 00:08:57.908 "name": "BaseBdev1", 00:08:57.908 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 65536 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev2", 00:08:57.908 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 65536 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev3", 00:08:57.908 "uuid": 
"2f6243f8-5746-4074-b4c0-3f500da2e757", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 65536 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev4", 00:08:57.908 "uuid": "e6bdc174-9e81-4032-8a78-6ff572f6a054", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 0, 00:08:57.908 "data_size": 65536 00:08:57.908 } 00:08:57.908 ] 00:08:57.908 }' 00:08:57.908 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.908 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.168 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.168 [2024-12-15 18:39:58.497949] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.169 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.169 18:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.169 "name": "Existed_Raid", 00:08:58.169 "aliases": [ 00:08:58.169 "43e06db8-76c9-437e-b7ec-71f5688f5576" 00:08:58.169 ], 00:08:58.169 "product_name": "Raid Volume", 00:08:58.169 "block_size": 512, 00:08:58.169 "num_blocks": 262144, 00:08:58.169 "uuid": "43e06db8-76c9-437e-b7ec-71f5688f5576", 00:08:58.169 "assigned_rate_limits": { 00:08:58.169 "rw_ios_per_sec": 0, 00:08:58.169 "rw_mbytes_per_sec": 0, 00:08:58.169 "r_mbytes_per_sec": 0, 00:08:58.169 "w_mbytes_per_sec": 0 00:08:58.169 }, 00:08:58.169 "claimed": false, 00:08:58.169 "zoned": false, 00:08:58.169 "supported_io_types": { 00:08:58.169 "read": true, 00:08:58.169 "write": true, 00:08:58.169 "unmap": true, 00:08:58.169 "flush": true, 00:08:58.169 "reset": true, 00:08:58.169 "nvme_admin": false, 00:08:58.169 "nvme_io": false, 00:08:58.169 "nvme_io_md": false, 00:08:58.169 "write_zeroes": true, 00:08:58.169 "zcopy": false, 00:08:58.169 "get_zone_info": false, 00:08:58.169 "zone_management": false, 00:08:58.169 "zone_append": false, 00:08:58.169 "compare": false, 00:08:58.169 "compare_and_write": false, 00:08:58.169 "abort": false, 00:08:58.169 "seek_hole": false, 00:08:58.169 "seek_data": false, 00:08:58.169 "copy": false, 00:08:58.169 "nvme_iov_md": false 00:08:58.169 }, 00:08:58.169 "memory_domains": [ 00:08:58.169 { 00:08:58.169 "dma_device_id": "system", 00:08:58.169 "dma_device_type": 1 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.169 "dma_device_type": 2 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "system", 00:08:58.169 "dma_device_type": 1 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.169 "dma_device_type": 2 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "system", 00:08:58.169 "dma_device_type": 1 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:58.169 "dma_device_type": 2 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "system", 00:08:58.169 "dma_device_type": 1 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.169 "dma_device_type": 2 00:08:58.169 } 00:08:58.169 ], 00:08:58.169 "driver_specific": { 00:08:58.169 "raid": { 00:08:58.169 "uuid": "43e06db8-76c9-437e-b7ec-71f5688f5576", 00:08:58.169 "strip_size_kb": 64, 00:08:58.169 "state": "online", 00:08:58.169 "raid_level": "raid0", 00:08:58.169 "superblock": false, 00:08:58.169 "num_base_bdevs": 4, 00:08:58.169 "num_base_bdevs_discovered": 4, 00:08:58.169 "num_base_bdevs_operational": 4, 00:08:58.169 "base_bdevs_list": [ 00:08:58.169 { 00:08:58.169 "name": "BaseBdev1", 00:08:58.169 "uuid": "ca3ce218-b85b-48a9-a559-0a629f12c1a0", 00:08:58.169 "is_configured": true, 00:08:58.169 "data_offset": 0, 00:08:58.169 "data_size": 65536 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "name": "BaseBdev2", 00:08:58.169 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:58.169 "is_configured": true, 00:08:58.169 "data_offset": 0, 00:08:58.169 "data_size": 65536 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "name": "BaseBdev3", 00:08:58.169 "uuid": "2f6243f8-5746-4074-b4c0-3f500da2e757", 00:08:58.169 "is_configured": true, 00:08:58.169 "data_offset": 0, 00:08:58.169 "data_size": 65536 00:08:58.169 }, 00:08:58.169 { 00:08:58.169 "name": "BaseBdev4", 00:08:58.169 "uuid": "e6bdc174-9e81-4032-8a78-6ff572f6a054", 00:08:58.169 "is_configured": true, 00:08:58.169 "data_offset": 0, 00:08:58.169 "data_size": 65536 00:08:58.169 } 00:08:58.169 ] 00:08:58.169 } 00:08:58.169 } 00:08:58.169 }' 00:08:58.169 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.169 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:58.169 BaseBdev2 00:08:58.169 BaseBdev3 
00:08:58.169 BaseBdev4' 00:08:58.169 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.429 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.430 18:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.430 18:39:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.430 [2024-12-15 18:39:58.821031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:58.430 [2024-12-15 18:39:58.821100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.430 [2024-12-15 18:39:58.821172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.430 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.690 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.690 "name": "Existed_Raid", 00:08:58.690 "uuid": "43e06db8-76c9-437e-b7ec-71f5688f5576", 00:08:58.690 "strip_size_kb": 64, 00:08:58.690 "state": "offline", 00:08:58.690 "raid_level": "raid0", 00:08:58.690 "superblock": false, 00:08:58.690 "num_base_bdevs": 4, 00:08:58.690 "num_base_bdevs_discovered": 3, 00:08:58.690 "num_base_bdevs_operational": 3, 00:08:58.690 "base_bdevs_list": [ 00:08:58.690 { 00:08:58.690 "name": null, 00:08:58.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.690 "is_configured": false, 00:08:58.690 "data_offset": 0, 00:08:58.690 "data_size": 65536 00:08:58.690 }, 00:08:58.690 { 00:08:58.690 "name": "BaseBdev2", 00:08:58.690 "uuid": "dc56f4c3-86a0-4db4-b47f-0a46623313e6", 00:08:58.690 "is_configured": 
true, 00:08:58.690 "data_offset": 0, 00:08:58.690 "data_size": 65536 00:08:58.690 }, 00:08:58.690 { 00:08:58.690 "name": "BaseBdev3", 00:08:58.690 "uuid": "2f6243f8-5746-4074-b4c0-3f500da2e757", 00:08:58.690 "is_configured": true, 00:08:58.690 "data_offset": 0, 00:08:58.690 "data_size": 65536 00:08:58.690 }, 00:08:58.690 { 00:08:58.690 "name": "BaseBdev4", 00:08:58.690 "uuid": "e6bdc174-9e81-4032-8a78-6ff572f6a054", 00:08:58.690 "is_configured": true, 00:08:58.690 "data_offset": 0, 00:08:58.690 "data_size": 65536 00:08:58.690 } 00:08:58.690 ] 00:08:58.690 }' 00:08:58.690 18:39:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.690 18:39:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 [2024-12-15 18:39:59.335338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.950 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 [2024-12-15 18:39:59.394515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.211 18:39:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 [2024-12-15 18:39:59.461744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:59.211 [2024-12-15 18:39:59.461847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 BaseBdev2 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 [ 00:08:59.211 { 00:08:59.211 "name": "BaseBdev2", 00:08:59.211 "aliases": [ 00:08:59.211 "bc3c715c-ede5-406d-b81c-f04924711452" 00:08:59.211 ], 00:08:59.211 "product_name": "Malloc disk", 00:08:59.211 "block_size": 512, 00:08:59.211 "num_blocks": 65536, 00:08:59.211 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:08:59.211 "assigned_rate_limits": { 00:08:59.211 "rw_ios_per_sec": 0, 00:08:59.211 "rw_mbytes_per_sec": 0, 00:08:59.211 "r_mbytes_per_sec": 0, 00:08:59.211 "w_mbytes_per_sec": 0 00:08:59.211 }, 00:08:59.211 "claimed": false, 00:08:59.211 "zoned": false, 00:08:59.211 "supported_io_types": { 00:08:59.211 "read": true, 00:08:59.211 "write": true, 00:08:59.211 "unmap": true, 00:08:59.211 "flush": true, 00:08:59.211 "reset": true, 00:08:59.211 "nvme_admin": false, 00:08:59.211 "nvme_io": false, 00:08:59.211 "nvme_io_md": false, 00:08:59.211 "write_zeroes": true, 00:08:59.211 "zcopy": true, 00:08:59.211 "get_zone_info": false, 00:08:59.211 "zone_management": false, 00:08:59.211 "zone_append": false, 00:08:59.211 "compare": false, 00:08:59.211 "compare_and_write": false, 00:08:59.211 "abort": true, 00:08:59.211 "seek_hole": false, 00:08:59.211 
"seek_data": false, 00:08:59.211 "copy": true, 00:08:59.211 "nvme_iov_md": false 00:08:59.211 }, 00:08:59.211 "memory_domains": [ 00:08:59.211 { 00:08:59.211 "dma_device_id": "system", 00:08:59.211 "dma_device_type": 1 00:08:59.211 }, 00:08:59.211 { 00:08:59.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.211 "dma_device_type": 2 00:08:59.211 } 00:08:59.211 ], 00:08:59.211 "driver_specific": {} 00:08:59.211 } 00:08:59.211 ] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 BaseBdev3 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.211 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.211 [ 00:08:59.211 { 00:08:59.211 "name": "BaseBdev3", 00:08:59.211 "aliases": [ 00:08:59.211 "423dfd61-9ca4-404f-ac62-d9e28287166f" 00:08:59.211 ], 00:08:59.211 "product_name": "Malloc disk", 00:08:59.211 "block_size": 512, 00:08:59.211 "num_blocks": 65536, 00:08:59.211 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:08:59.211 "assigned_rate_limits": { 00:08:59.211 "rw_ios_per_sec": 0, 00:08:59.211 "rw_mbytes_per_sec": 0, 00:08:59.211 "r_mbytes_per_sec": 0, 00:08:59.211 "w_mbytes_per_sec": 0 00:08:59.211 }, 00:08:59.211 "claimed": false, 00:08:59.211 "zoned": false, 00:08:59.211 "supported_io_types": { 00:08:59.211 "read": true, 00:08:59.211 "write": true, 00:08:59.211 "unmap": true, 00:08:59.211 "flush": true, 00:08:59.211 "reset": true, 00:08:59.211 "nvme_admin": false, 00:08:59.211 "nvme_io": false, 00:08:59.211 "nvme_io_md": false, 00:08:59.211 "write_zeroes": true, 00:08:59.211 "zcopy": true, 00:08:59.211 "get_zone_info": false, 00:08:59.211 "zone_management": false, 00:08:59.211 "zone_append": false, 00:08:59.211 "compare": false, 00:08:59.211 "compare_and_write": false, 00:08:59.211 "abort": true, 00:08:59.211 "seek_hole": false, 00:08:59.211 "seek_data": false, 
00:08:59.211 "copy": true, 00:08:59.211 "nvme_iov_md": false 00:08:59.211 }, 00:08:59.211 "memory_domains": [ 00:08:59.211 { 00:08:59.211 "dma_device_id": "system", 00:08:59.211 "dma_device_type": 1 00:08:59.211 }, 00:08:59.211 { 00:08:59.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.211 "dma_device_type": 2 00:08:59.211 } 00:08:59.212 ], 00:08:59.212 "driver_specific": {} 00:08:59.212 } 00:08:59.212 ] 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.212 BaseBdev4 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.212 
18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.212 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.472 [ 00:08:59.472 { 00:08:59.472 "name": "BaseBdev4", 00:08:59.472 "aliases": [ 00:08:59.472 "2084ace7-5dea-439d-858f-f6e64102b27b" 00:08:59.472 ], 00:08:59.472 "product_name": "Malloc disk", 00:08:59.472 "block_size": 512, 00:08:59.472 "num_blocks": 65536, 00:08:59.472 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:08:59.472 "assigned_rate_limits": { 00:08:59.472 "rw_ios_per_sec": 0, 00:08:59.472 "rw_mbytes_per_sec": 0, 00:08:59.472 "r_mbytes_per_sec": 0, 00:08:59.472 "w_mbytes_per_sec": 0 00:08:59.472 }, 00:08:59.472 "claimed": false, 00:08:59.472 "zoned": false, 00:08:59.472 "supported_io_types": { 00:08:59.472 "read": true, 00:08:59.472 "write": true, 00:08:59.472 "unmap": true, 00:08:59.472 "flush": true, 00:08:59.472 "reset": true, 00:08:59.472 "nvme_admin": false, 00:08:59.472 "nvme_io": false, 00:08:59.472 "nvme_io_md": false, 00:08:59.472 "write_zeroes": true, 00:08:59.472 "zcopy": true, 00:08:59.472 "get_zone_info": false, 00:08:59.472 "zone_management": false, 00:08:59.472 "zone_append": false, 00:08:59.472 "compare": false, 00:08:59.472 "compare_and_write": false, 00:08:59.472 "abort": true, 00:08:59.472 "seek_hole": false, 00:08:59.472 "seek_data": false, 00:08:59.472 
"copy": true, 00:08:59.472 "nvme_iov_md": false 00:08:59.472 }, 00:08:59.472 "memory_domains": [ 00:08:59.472 { 00:08:59.472 "dma_device_id": "system", 00:08:59.472 "dma_device_type": 1 00:08:59.472 }, 00:08:59.472 { 00:08:59.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.472 "dma_device_type": 2 00:08:59.472 } 00:08:59.472 ], 00:08:59.472 "driver_specific": {} 00:08:59.472 } 00:08:59.472 ] 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.472 [2024-12-15 18:39:59.670554] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:59.472 [2024-12-15 18:39:59.670649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:59.472 [2024-12-15 18:39:59.670691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.472 [2024-12-15 18:39:59.672568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.472 [2024-12-15 18:39:59.672657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.472 18:39:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.472 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.472 "name": "Existed_Raid", 00:08:59.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.472 "strip_size_kb": 64, 00:08:59.472 "state": "configuring", 00:08:59.472 
"raid_level": "raid0", 00:08:59.472 "superblock": false, 00:08:59.472 "num_base_bdevs": 4, 00:08:59.472 "num_base_bdevs_discovered": 3, 00:08:59.472 "num_base_bdevs_operational": 4, 00:08:59.472 "base_bdevs_list": [ 00:08:59.472 { 00:08:59.472 "name": "BaseBdev1", 00:08:59.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.472 "is_configured": false, 00:08:59.472 "data_offset": 0, 00:08:59.472 "data_size": 0 00:08:59.472 }, 00:08:59.472 { 00:08:59.472 "name": "BaseBdev2", 00:08:59.472 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:08:59.472 "is_configured": true, 00:08:59.472 "data_offset": 0, 00:08:59.472 "data_size": 65536 00:08:59.472 }, 00:08:59.472 { 00:08:59.472 "name": "BaseBdev3", 00:08:59.472 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:08:59.472 "is_configured": true, 00:08:59.472 "data_offset": 0, 00:08:59.472 "data_size": 65536 00:08:59.472 }, 00:08:59.472 { 00:08:59.472 "name": "BaseBdev4", 00:08:59.472 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:08:59.472 "is_configured": true, 00:08:59.473 "data_offset": 0, 00:08:59.473 "data_size": 65536 00:08:59.473 } 00:08:59.473 ] 00:08:59.473 }' 00:08:59.473 18:39:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.473 18:39:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.733 [2024-12-15 18:40:00.133816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.733 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.992 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.992 "name": "Existed_Raid", 00:08:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.992 "strip_size_kb": 64, 00:08:59.992 "state": "configuring", 00:08:59.992 "raid_level": "raid0", 00:08:59.992 "superblock": false, 00:08:59.992 
"num_base_bdevs": 4, 00:08:59.992 "num_base_bdevs_discovered": 2, 00:08:59.992 "num_base_bdevs_operational": 4, 00:08:59.992 "base_bdevs_list": [ 00:08:59.992 { 00:08:59.992 "name": "BaseBdev1", 00:08:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.992 "is_configured": false, 00:08:59.992 "data_offset": 0, 00:08:59.992 "data_size": 0 00:08:59.992 }, 00:08:59.992 { 00:08:59.992 "name": null, 00:08:59.992 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:08:59.992 "is_configured": false, 00:08:59.992 "data_offset": 0, 00:08:59.992 "data_size": 65536 00:08:59.992 }, 00:08:59.992 { 00:08:59.992 "name": "BaseBdev3", 00:08:59.992 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:08:59.992 "is_configured": true, 00:08:59.992 "data_offset": 0, 00:08:59.992 "data_size": 65536 00:08:59.992 }, 00:08:59.992 { 00:08:59.992 "name": "BaseBdev4", 00:08:59.992 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:08:59.992 "is_configured": true, 00:08:59.992 "data_offset": 0, 00:08:59.992 "data_size": 65536 00:08:59.992 } 00:08:59.992 ] 00:08:59.992 }' 00:08:59.992 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.992 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:00.253 18:40:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 [2024-12-15 18:40:00.643898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.253 BaseBdev1 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.253 [ 00:09:00.253 { 00:09:00.253 "name": "BaseBdev1", 00:09:00.253 "aliases": [ 00:09:00.253 "22503dbd-8f45-4756-8334-272f91ccffd2" 00:09:00.253 ], 00:09:00.253 "product_name": "Malloc disk", 00:09:00.253 "block_size": 512, 00:09:00.253 "num_blocks": 65536, 00:09:00.253 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:00.253 "assigned_rate_limits": { 00:09:00.253 "rw_ios_per_sec": 0, 00:09:00.253 "rw_mbytes_per_sec": 0, 00:09:00.253 "r_mbytes_per_sec": 0, 00:09:00.253 "w_mbytes_per_sec": 0 00:09:00.253 }, 00:09:00.253 "claimed": true, 00:09:00.253 "claim_type": "exclusive_write", 00:09:00.253 "zoned": false, 00:09:00.253 "supported_io_types": { 00:09:00.253 "read": true, 00:09:00.253 "write": true, 00:09:00.253 "unmap": true, 00:09:00.253 "flush": true, 00:09:00.253 "reset": true, 00:09:00.253 "nvme_admin": false, 00:09:00.253 "nvme_io": false, 00:09:00.253 "nvme_io_md": false, 00:09:00.253 "write_zeroes": true, 00:09:00.253 "zcopy": true, 00:09:00.253 "get_zone_info": false, 00:09:00.253 "zone_management": false, 00:09:00.253 "zone_append": false, 00:09:00.253 "compare": false, 00:09:00.253 "compare_and_write": false, 00:09:00.253 "abort": true, 00:09:00.253 "seek_hole": false, 00:09:00.253 "seek_data": false, 00:09:00.253 "copy": true, 00:09:00.253 "nvme_iov_md": false 00:09:00.253 }, 00:09:00.253 "memory_domains": [ 00:09:00.253 { 00:09:00.253 "dma_device_id": "system", 00:09:00.253 "dma_device_type": 1 00:09:00.253 }, 00:09:00.253 { 00:09:00.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.253 "dma_device_type": 2 00:09:00.253 } 00:09:00.253 ], 00:09:00.253 "driver_specific": {} 00:09:00.253 } 00:09:00.253 ] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.253 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.513 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.513 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.513 "name": "Existed_Raid", 00:09:00.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.513 "strip_size_kb": 64, 00:09:00.513 "state": "configuring", 00:09:00.513 "raid_level": "raid0", 00:09:00.513 "superblock": false, 
00:09:00.513 "num_base_bdevs": 4, 00:09:00.513 "num_base_bdevs_discovered": 3, 00:09:00.513 "num_base_bdevs_operational": 4, 00:09:00.513 "base_bdevs_list": [ 00:09:00.513 { 00:09:00.513 "name": "BaseBdev1", 00:09:00.513 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:00.513 "is_configured": true, 00:09:00.513 "data_offset": 0, 00:09:00.513 "data_size": 65536 00:09:00.513 }, 00:09:00.514 { 00:09:00.514 "name": null, 00:09:00.514 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:00.514 "is_configured": false, 00:09:00.514 "data_offset": 0, 00:09:00.514 "data_size": 65536 00:09:00.514 }, 00:09:00.514 { 00:09:00.514 "name": "BaseBdev3", 00:09:00.514 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:00.514 "is_configured": true, 00:09:00.514 "data_offset": 0, 00:09:00.514 "data_size": 65536 00:09:00.514 }, 00:09:00.514 { 00:09:00.514 "name": "BaseBdev4", 00:09:00.514 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:00.514 "is_configured": true, 00:09:00.514 "data_offset": 0, 00:09:00.514 "data_size": 65536 00:09:00.514 } 00:09:00.514 ] 00:09:00.514 }' 00:09:00.514 18:40:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.514 18:40:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.773 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:00.774 18:40:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.774 [2024-12-15 18:40:01.179044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.774 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.033 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.033 "name": "Existed_Raid", 00:09:01.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.033 "strip_size_kb": 64, 00:09:01.033 "state": "configuring", 00:09:01.033 "raid_level": "raid0", 00:09:01.033 "superblock": false, 00:09:01.033 "num_base_bdevs": 4, 00:09:01.033 "num_base_bdevs_discovered": 2, 00:09:01.033 "num_base_bdevs_operational": 4, 00:09:01.033 "base_bdevs_list": [ 00:09:01.033 { 00:09:01.033 "name": "BaseBdev1", 00:09:01.033 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:01.033 "is_configured": true, 00:09:01.033 "data_offset": 0, 00:09:01.033 "data_size": 65536 00:09:01.033 }, 00:09:01.033 { 00:09:01.033 "name": null, 00:09:01.033 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:01.033 "is_configured": false, 00:09:01.033 "data_offset": 0, 00:09:01.033 "data_size": 65536 00:09:01.033 }, 00:09:01.033 { 00:09:01.033 "name": null, 00:09:01.033 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:01.033 "is_configured": false, 00:09:01.033 "data_offset": 0, 00:09:01.033 "data_size": 65536 00:09:01.033 }, 00:09:01.033 { 00:09:01.033 "name": "BaseBdev4", 00:09:01.033 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:01.033 "is_configured": true, 00:09:01.033 "data_offset": 0, 00:09:01.033 "data_size": 65536 00:09:01.033 } 00:09:01.033 ] 00:09:01.033 }' 00:09:01.033 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.033 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.293 [2024-12-15 18:40:01.698225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:01.293 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.294 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.554 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.554 "name": "Existed_Raid", 00:09:01.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.554 "strip_size_kb": 64, 00:09:01.554 "state": "configuring", 00:09:01.554 "raid_level": "raid0", 00:09:01.554 "superblock": false, 00:09:01.554 "num_base_bdevs": 4, 00:09:01.554 "num_base_bdevs_discovered": 3, 00:09:01.554 "num_base_bdevs_operational": 4, 00:09:01.554 "base_bdevs_list": [ 00:09:01.554 { 00:09:01.554 "name": "BaseBdev1", 00:09:01.554 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:01.554 "is_configured": true, 00:09:01.554 "data_offset": 0, 00:09:01.554 "data_size": 65536 00:09:01.554 }, 00:09:01.554 { 00:09:01.554 "name": null, 00:09:01.554 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:01.554 "is_configured": false, 00:09:01.554 "data_offset": 0, 00:09:01.554 "data_size": 65536 00:09:01.554 }, 00:09:01.554 { 00:09:01.554 "name": "BaseBdev3", 00:09:01.554 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:01.554 "is_configured": 
true, 00:09:01.554 "data_offset": 0, 00:09:01.554 "data_size": 65536 00:09:01.554 }, 00:09:01.554 { 00:09:01.554 "name": "BaseBdev4", 00:09:01.554 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:01.554 "is_configured": true, 00:09:01.554 "data_offset": 0, 00:09:01.554 "data_size": 65536 00:09:01.554 } 00:09:01.554 ] 00:09:01.554 }' 00:09:01.554 18:40:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.554 18:40:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 [2024-12-15 18:40:02.189384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.814 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.814 "name": "Existed_Raid", 00:09:01.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.814 "strip_size_kb": 64, 00:09:01.814 "state": "configuring", 00:09:01.814 "raid_level": "raid0", 00:09:01.814 "superblock": false, 00:09:01.814 "num_base_bdevs": 4, 00:09:01.814 "num_base_bdevs_discovered": 2, 00:09:01.814 "num_base_bdevs_operational": 4, 00:09:01.814 
"base_bdevs_list": [ 00:09:01.814 { 00:09:01.814 "name": null, 00:09:01.814 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:01.814 "is_configured": false, 00:09:01.814 "data_offset": 0, 00:09:01.814 "data_size": 65536 00:09:01.814 }, 00:09:01.814 { 00:09:01.814 "name": null, 00:09:01.814 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:01.814 "is_configured": false, 00:09:01.814 "data_offset": 0, 00:09:01.814 "data_size": 65536 00:09:01.814 }, 00:09:01.814 { 00:09:01.814 "name": "BaseBdev3", 00:09:01.814 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:01.814 "is_configured": true, 00:09:01.814 "data_offset": 0, 00:09:01.814 "data_size": 65536 00:09:01.814 }, 00:09:01.814 { 00:09:01.814 "name": "BaseBdev4", 00:09:01.814 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:01.814 "is_configured": true, 00:09:01.814 "data_offset": 0, 00:09:01.814 "data_size": 65536 00:09:01.814 } 00:09:01.814 ] 00:09:01.814 }' 00:09:02.074 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.074 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:02.334 18:40:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 [2024-12-15 18:40:02.694978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.334 "name": "Existed_Raid", 00:09:02.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.334 "strip_size_kb": 64, 00:09:02.334 "state": "configuring", 00:09:02.334 "raid_level": "raid0", 00:09:02.334 "superblock": false, 00:09:02.334 "num_base_bdevs": 4, 00:09:02.334 "num_base_bdevs_discovered": 3, 00:09:02.334 "num_base_bdevs_operational": 4, 00:09:02.334 "base_bdevs_list": [ 00:09:02.334 { 00:09:02.334 "name": null, 00:09:02.334 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:02.334 "is_configured": false, 00:09:02.334 "data_offset": 0, 00:09:02.334 "data_size": 65536 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "name": "BaseBdev2", 00:09:02.334 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:02.334 "is_configured": true, 00:09:02.334 "data_offset": 0, 00:09:02.334 "data_size": 65536 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "name": "BaseBdev3", 00:09:02.334 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:02.334 "is_configured": true, 00:09:02.334 "data_offset": 0, 00:09:02.334 "data_size": 65536 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "name": "BaseBdev4", 00:09:02.334 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:02.334 "is_configured": true, 00:09:02.334 "data_offset": 0, 00:09:02.334 "data_size": 65536 00:09:02.334 } 00:09:02.334 ] 00:09:02.334 }' 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.334 18:40:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 22503dbd-8f45-4756-8334-272f91ccffd2 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 [2024-12-15 18:40:03.229092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:02.904 [2024-12-15 18:40:03.229208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:02.904 [2024-12-15 18:40:03.229233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:02.904 [2024-12-15 18:40:03.229528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:02.904 [2024-12-15 18:40:03.229677] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:02.904 [2024-12-15 18:40:03.229717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:02.904 [2024-12-15 18:40:03.229938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.904 NewBaseBdev 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 [ 00:09:02.904 { 
00:09:02.904 "name": "NewBaseBdev", 00:09:02.904 "aliases": [ 00:09:02.904 "22503dbd-8f45-4756-8334-272f91ccffd2" 00:09:02.904 ], 00:09:02.904 "product_name": "Malloc disk", 00:09:02.904 "block_size": 512, 00:09:02.904 "num_blocks": 65536, 00:09:02.904 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:02.904 "assigned_rate_limits": { 00:09:02.904 "rw_ios_per_sec": 0, 00:09:02.904 "rw_mbytes_per_sec": 0, 00:09:02.904 "r_mbytes_per_sec": 0, 00:09:02.904 "w_mbytes_per_sec": 0 00:09:02.904 }, 00:09:02.904 "claimed": true, 00:09:02.904 "claim_type": "exclusive_write", 00:09:02.904 "zoned": false, 00:09:02.904 "supported_io_types": { 00:09:02.904 "read": true, 00:09:02.904 "write": true, 00:09:02.904 "unmap": true, 00:09:02.904 "flush": true, 00:09:02.904 "reset": true, 00:09:02.904 "nvme_admin": false, 00:09:02.904 "nvme_io": false, 00:09:02.904 "nvme_io_md": false, 00:09:02.904 "write_zeroes": true, 00:09:02.904 "zcopy": true, 00:09:02.904 "get_zone_info": false, 00:09:02.904 "zone_management": false, 00:09:02.904 "zone_append": false, 00:09:02.904 "compare": false, 00:09:02.904 "compare_and_write": false, 00:09:02.904 "abort": true, 00:09:02.904 "seek_hole": false, 00:09:02.904 "seek_data": false, 00:09:02.904 "copy": true, 00:09:02.904 "nvme_iov_md": false 00:09:02.904 }, 00:09:02.904 "memory_domains": [ 00:09:02.904 { 00:09:02.904 "dma_device_id": "system", 00:09:02.904 "dma_device_type": 1 00:09:02.904 }, 00:09:02.904 { 00:09:02.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.904 "dma_device_type": 2 00:09:02.904 } 00:09:02.904 ], 00:09:02.904 "driver_specific": {} 00:09:02.904 } 00:09:02.904 ] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:02.904 
18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.904 "name": "Existed_Raid", 00:09:02.904 "uuid": "bdaa1a0a-55a9-45f6-878f-6a75fa9604bb", 00:09:02.904 "strip_size_kb": 64, 00:09:02.904 "state": "online", 00:09:02.904 "raid_level": "raid0", 00:09:02.904 "superblock": false, 00:09:02.904 "num_base_bdevs": 4, 00:09:02.904 "num_base_bdevs_discovered": 4, 00:09:02.904 
"num_base_bdevs_operational": 4, 00:09:02.904 "base_bdevs_list": [ 00:09:02.904 { 00:09:02.904 "name": "NewBaseBdev", 00:09:02.904 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:02.904 "is_configured": true, 00:09:02.904 "data_offset": 0, 00:09:02.904 "data_size": 65536 00:09:02.904 }, 00:09:02.904 { 00:09:02.904 "name": "BaseBdev2", 00:09:02.904 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:02.904 "is_configured": true, 00:09:02.904 "data_offset": 0, 00:09:02.904 "data_size": 65536 00:09:02.904 }, 00:09:02.904 { 00:09:02.904 "name": "BaseBdev3", 00:09:02.904 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:02.904 "is_configured": true, 00:09:02.904 "data_offset": 0, 00:09:02.904 "data_size": 65536 00:09:02.904 }, 00:09:02.904 { 00:09:02.904 "name": "BaseBdev4", 00:09:02.904 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:02.904 "is_configured": true, 00:09:02.904 "data_offset": 0, 00:09:02.904 "data_size": 65536 00:09:02.904 } 00:09:02.904 ] 00:09:02.904 }' 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.904 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.473 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.473 
18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.474 [2024-12-15 18:40:03.696739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.474 "name": "Existed_Raid", 00:09:03.474 "aliases": [ 00:09:03.474 "bdaa1a0a-55a9-45f6-878f-6a75fa9604bb" 00:09:03.474 ], 00:09:03.474 "product_name": "Raid Volume", 00:09:03.474 "block_size": 512, 00:09:03.474 "num_blocks": 262144, 00:09:03.474 "uuid": "bdaa1a0a-55a9-45f6-878f-6a75fa9604bb", 00:09:03.474 "assigned_rate_limits": { 00:09:03.474 "rw_ios_per_sec": 0, 00:09:03.474 "rw_mbytes_per_sec": 0, 00:09:03.474 "r_mbytes_per_sec": 0, 00:09:03.474 "w_mbytes_per_sec": 0 00:09:03.474 }, 00:09:03.474 "claimed": false, 00:09:03.474 "zoned": false, 00:09:03.474 "supported_io_types": { 00:09:03.474 "read": true, 00:09:03.474 "write": true, 00:09:03.474 "unmap": true, 00:09:03.474 "flush": true, 00:09:03.474 "reset": true, 00:09:03.474 "nvme_admin": false, 00:09:03.474 "nvme_io": false, 00:09:03.474 "nvme_io_md": false, 00:09:03.474 "write_zeroes": true, 00:09:03.474 "zcopy": false, 00:09:03.474 "get_zone_info": false, 00:09:03.474 "zone_management": false, 00:09:03.474 "zone_append": false, 00:09:03.474 "compare": false, 00:09:03.474 "compare_and_write": false, 00:09:03.474 "abort": false, 00:09:03.474 "seek_hole": false, 00:09:03.474 "seek_data": false, 00:09:03.474 "copy": false, 00:09:03.474 "nvme_iov_md": false 00:09:03.474 }, 00:09:03.474 "memory_domains": [ 00:09:03.474 { 00:09:03.474 "dma_device_id": 
"system", 00:09:03.474 "dma_device_type": 1 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.474 "dma_device_type": 2 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "system", 00:09:03.474 "dma_device_type": 1 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.474 "dma_device_type": 2 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "system", 00:09:03.474 "dma_device_type": 1 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.474 "dma_device_type": 2 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "system", 00:09:03.474 "dma_device_type": 1 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.474 "dma_device_type": 2 00:09:03.474 } 00:09:03.474 ], 00:09:03.474 "driver_specific": { 00:09:03.474 "raid": { 00:09:03.474 "uuid": "bdaa1a0a-55a9-45f6-878f-6a75fa9604bb", 00:09:03.474 "strip_size_kb": 64, 00:09:03.474 "state": "online", 00:09:03.474 "raid_level": "raid0", 00:09:03.474 "superblock": false, 00:09:03.474 "num_base_bdevs": 4, 00:09:03.474 "num_base_bdevs_discovered": 4, 00:09:03.474 "num_base_bdevs_operational": 4, 00:09:03.474 "base_bdevs_list": [ 00:09:03.474 { 00:09:03.474 "name": "NewBaseBdev", 00:09:03.474 "uuid": "22503dbd-8f45-4756-8334-272f91ccffd2", 00:09:03.474 "is_configured": true, 00:09:03.474 "data_offset": 0, 00:09:03.474 "data_size": 65536 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "name": "BaseBdev2", 00:09:03.474 "uuid": "bc3c715c-ede5-406d-b81c-f04924711452", 00:09:03.474 "is_configured": true, 00:09:03.474 "data_offset": 0, 00:09:03.474 "data_size": 65536 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "name": "BaseBdev3", 00:09:03.474 "uuid": "423dfd61-9ca4-404f-ac62-d9e28287166f", 00:09:03.474 "is_configured": true, 00:09:03.474 "data_offset": 0, 00:09:03.474 "data_size": 65536 00:09:03.474 }, 00:09:03.474 { 00:09:03.474 "name": 
"BaseBdev4", 00:09:03.474 "uuid": "2084ace7-5dea-439d-858f-f6e64102b27b", 00:09:03.474 "is_configured": true, 00:09:03.474 "data_offset": 0, 00:09:03.474 "data_size": 65536 00:09:03.474 } 00:09:03.474 ] 00:09:03.474 } 00:09:03.474 } 00:09:03.474 }' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:03.474 BaseBdev2 00:09:03.474 BaseBdev3 00:09:03.474 BaseBdev4' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.474 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:03.734 18:40:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.734 18:40:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.734 18:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.734 18:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.735 [2024-12-15 18:40:04.035820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.735 [2024-12-15 18:40:04.035889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.735 [2024-12-15 18:40:04.035986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.735 [2024-12-15 18:40:04.036070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.735 [2024-12-15 18:40:04.036130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82276 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 82276 ']' 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82276 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82276 00:09:03.735 killing process with pid 82276 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82276' 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 82276 00:09:03.735 [2024-12-15 18:40:04.081880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.735 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 82276 00:09:03.735 [2024-12-15 18:40:04.123932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:03.995 00:09:03.995 real 0m9.712s 00:09:03.995 user 0m16.545s 00:09:03.995 sys 0m2.130s 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.995 ************************************ 00:09:03.995 END TEST raid_state_function_test 00:09:03.995 ************************************ 00:09:03.995 18:40:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
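The killprocess sequence above (kill -0, uname, ps, then kill and wait on pid 82276) can be sketched as a small standalone pattern. This is a hedged illustration, not the autotest_common.sh helper itself: `sleep` stands in for the SPDK app process, and the process-name check mirrors the `ps --no-headers -o comm=` probe in the trace.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: probe the pid with kill -0 before
# signalling, read its command name the way the trace does, then SIGTERM
# and reap it. "sleep" is a stand-in for the real SPDK app under test.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then          # is the process still alive?
  name=$(ps --no-headers -o comm= "$pid")    # same probe as the trace above
  echo "killing process with pid $pid ($name)"
  kill "$pid"
fi
wait "$pid" 2>/dev/null || true              # reap; ignore the SIGTERM status
```

`kill -0` sends no signal; it only checks that the pid exists and is signalable, which is why the helper uses it as a liveness test before the real kill.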
00:09:03.995 18:40:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:03.995 18:40:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.995 18:40:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.995 ************************************ 00:09:03.995 START TEST raid_state_function_test_sb 00:09:03.995 ************************************ 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:03.995 18:40:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:03.995 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:04.254 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:04.254 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:04.254 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82925 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82925' 00:09:04.255 Process raid pid: 82925 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82925 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82925 ']' 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.255 18:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.255 [2024-12-15 18:40:04.526234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
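The `(( i <= num_base_bdevs ))` loop traced above expands the configured bdev count into the BaseBdev1..BaseBdevN name list that later feeds bdev_raid_create. A minimal sketch of that expansion, assuming the same count of 4 used in this run:

```shell
#!/usr/bin/env bash
# Sketch of the base_bdevs construction loop traced above: expand the bdev
# count into the BaseBdev1..BaseBdevN names passed to bdev_raid_create.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"
```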
00:09:04.255 [2024-12-15 18:40:04.526466] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.255 [2024-12-15 18:40:04.688448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.514 [2024-12-15 18:40:04.714228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.514 [2024-12-15 18:40:04.756792] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.514 [2024-12-15 18:40:04.756950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.098 [2024-12-15 18:40:05.347693] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.098 [2024-12-15 18:40:05.347792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.098 [2024-12-15 18:40:05.347828] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.098 [2024-12-15 18:40:05.347839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.098 [2024-12-15 18:40:05.347846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:05.098 [2024-12-15 18:40:05.347856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.098 [2024-12-15 18:40:05.347862] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.098 [2024-12-15 18:40:05.347870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.098 18:40:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.098 "name": "Existed_Raid", 00:09:05.098 "uuid": "03f2d9f5-d090-429f-a31d-49e947b0e8b9", 00:09:05.098 "strip_size_kb": 64, 00:09:05.098 "state": "configuring", 00:09:05.098 "raid_level": "raid0", 00:09:05.098 "superblock": true, 00:09:05.098 "num_base_bdevs": 4, 00:09:05.098 "num_base_bdevs_discovered": 0, 00:09:05.098 "num_base_bdevs_operational": 4, 00:09:05.098 "base_bdevs_list": [ 00:09:05.098 { 00:09:05.098 "name": "BaseBdev1", 00:09:05.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.098 "is_configured": false, 00:09:05.098 "data_offset": 0, 00:09:05.098 "data_size": 0 00:09:05.098 }, 00:09:05.098 { 00:09:05.098 "name": "BaseBdev2", 00:09:05.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.098 "is_configured": false, 00:09:05.098 "data_offset": 0, 00:09:05.098 "data_size": 0 00:09:05.098 }, 00:09:05.098 { 00:09:05.098 "name": "BaseBdev3", 00:09:05.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.098 "is_configured": false, 00:09:05.098 "data_offset": 0, 00:09:05.098 "data_size": 0 00:09:05.098 }, 00:09:05.098 { 00:09:05.098 "name": "BaseBdev4", 00:09:05.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.098 "is_configured": false, 00:09:05.098 "data_offset": 0, 00:09:05.098 "data_size": 0 00:09:05.098 } 00:09:05.098 ] 00:09:05.098 }' 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.098 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 18:40:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 [2024-12-15 18:40:05.766901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.358 [2024-12-15 18:40:05.766982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 [2024-12-15 18:40:05.778889] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.358 [2024-12-15 18:40:05.778966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.358 [2024-12-15 18:40:05.778993] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.358 [2024-12-15 18:40:05.779017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.358 [2024-12-15 18:40:05.779034] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.358 [2024-12-15 18:40:05.779055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.358 [2024-12-15 18:40:05.779103] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:05.358 [2024-12-15 18:40:05.779135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.358 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.618 [2024-12-15 18:40:05.799905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.618 BaseBdev1 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.618 [ 00:09:05.618 { 00:09:05.618 "name": "BaseBdev1", 00:09:05.618 "aliases": [ 00:09:05.618 "a93b7149-ae94-4435-87cc-1d9418e7a871" 00:09:05.618 ], 00:09:05.618 "product_name": "Malloc disk", 00:09:05.618 "block_size": 512, 00:09:05.618 "num_blocks": 65536, 00:09:05.618 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:05.618 "assigned_rate_limits": { 00:09:05.618 "rw_ios_per_sec": 0, 00:09:05.618 "rw_mbytes_per_sec": 0, 00:09:05.618 "r_mbytes_per_sec": 0, 00:09:05.618 "w_mbytes_per_sec": 0 00:09:05.618 }, 00:09:05.618 "claimed": true, 00:09:05.618 "claim_type": "exclusive_write", 00:09:05.618 "zoned": false, 00:09:05.618 "supported_io_types": { 00:09:05.618 "read": true, 00:09:05.618 "write": true, 00:09:05.618 "unmap": true, 00:09:05.618 "flush": true, 00:09:05.618 "reset": true, 00:09:05.618 "nvme_admin": false, 00:09:05.618 "nvme_io": false, 00:09:05.618 "nvme_io_md": false, 00:09:05.618 "write_zeroes": true, 00:09:05.618 "zcopy": true, 00:09:05.618 "get_zone_info": false, 00:09:05.618 "zone_management": false, 00:09:05.618 "zone_append": false, 00:09:05.618 "compare": false, 00:09:05.618 "compare_and_write": false, 00:09:05.618 "abort": true, 00:09:05.618 "seek_hole": false, 00:09:05.618 "seek_data": false, 00:09:05.618 "copy": true, 00:09:05.618 "nvme_iov_md": false 00:09:05.618 }, 00:09:05.618 "memory_domains": [ 00:09:05.618 { 00:09:05.618 "dma_device_id": "system", 00:09:05.618 "dma_device_type": 1 00:09:05.618 }, 00:09:05.618 { 00:09:05.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.618 "dma_device_type": 2 00:09:05.618 } 
00:09:05.618 ], 00:09:05.618 "driver_specific": {} 00:09:05.618 } 00:09:05.618 ] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.618 18:40:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.618 "name": "Existed_Raid", 00:09:05.618 "uuid": "3c95629b-9980-4c02-9286-30381cd72eb1", 00:09:05.618 "strip_size_kb": 64, 00:09:05.618 "state": "configuring", 00:09:05.618 "raid_level": "raid0", 00:09:05.618 "superblock": true, 00:09:05.618 "num_base_bdevs": 4, 00:09:05.618 "num_base_bdevs_discovered": 1, 00:09:05.618 "num_base_bdevs_operational": 4, 00:09:05.618 "base_bdevs_list": [ 00:09:05.618 { 00:09:05.618 "name": "BaseBdev1", 00:09:05.618 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:05.618 "is_configured": true, 00:09:05.618 "data_offset": 2048, 00:09:05.618 "data_size": 63488 00:09:05.618 }, 00:09:05.618 { 00:09:05.618 "name": "BaseBdev2", 00:09:05.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.618 "is_configured": false, 00:09:05.618 "data_offset": 0, 00:09:05.618 "data_size": 0 00:09:05.618 }, 00:09:05.618 { 00:09:05.618 "name": "BaseBdev3", 00:09:05.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.618 "is_configured": false, 00:09:05.618 "data_offset": 0, 00:09:05.618 "data_size": 0 00:09:05.618 }, 00:09:05.618 { 00:09:05.618 "name": "BaseBdev4", 00:09:05.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.618 "is_configured": false, 00:09:05.618 "data_offset": 0, 00:09:05.618 "data_size": 0 00:09:05.618 } 00:09:05.618 ] 00:09:05.618 }' 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.618 18:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.879 18:40:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.879 [2024-12-15 18:40:06.295061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.879 [2024-12-15 18:40:06.295163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.879 [2024-12-15 18:40:06.307078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.879 [2024-12-15 18:40:06.308934] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.879 [2024-12-15 18:40:06.309004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.879 [2024-12-15 18:40:06.309032] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.879 [2024-12-15 18:40:06.309055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.879 [2024-12-15 18:40:06.309073] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:05.879 [2024-12-15 18:40:06.309092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.879 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:06.139 "name": "Existed_Raid", 00:09:06.139 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:06.139 "strip_size_kb": 64, 00:09:06.139 "state": "configuring", 00:09:06.139 "raid_level": "raid0", 00:09:06.139 "superblock": true, 00:09:06.139 "num_base_bdevs": 4, 00:09:06.139 "num_base_bdevs_discovered": 1, 00:09:06.139 "num_base_bdevs_operational": 4, 00:09:06.139 "base_bdevs_list": [ 00:09:06.139 { 00:09:06.139 "name": "BaseBdev1", 00:09:06.139 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:06.139 "is_configured": true, 00:09:06.139 "data_offset": 2048, 00:09:06.139 "data_size": 63488 00:09:06.139 }, 00:09:06.139 { 00:09:06.139 "name": "BaseBdev2", 00:09:06.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.139 "is_configured": false, 00:09:06.139 "data_offset": 0, 00:09:06.139 "data_size": 0 00:09:06.139 }, 00:09:06.139 { 00:09:06.139 "name": "BaseBdev3", 00:09:06.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.139 "is_configured": false, 00:09:06.139 "data_offset": 0, 00:09:06.139 "data_size": 0 00:09:06.139 }, 00:09:06.139 { 00:09:06.139 "name": "BaseBdev4", 00:09:06.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.139 "is_configured": false, 00:09:06.139 "data_offset": 0, 00:09:06.139 "data_size": 0 00:09:06.139 } 00:09:06.139 ] 00:09:06.139 }' 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.139 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 [2024-12-15 18:40:06.777227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:06.399 BaseBdev2 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 [ 00:09:06.399 { 00:09:06.399 "name": "BaseBdev2", 00:09:06.399 "aliases": [ 00:09:06.399 "26b9c027-3007-49d4-8036-5425a4ec43b5" 00:09:06.399 ], 00:09:06.399 "product_name": "Malloc disk", 00:09:06.399 "block_size": 512, 00:09:06.399 "num_blocks": 65536, 00:09:06.399 "uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 
00:09:06.399 "assigned_rate_limits": { 00:09:06.399 "rw_ios_per_sec": 0, 00:09:06.399 "rw_mbytes_per_sec": 0, 00:09:06.399 "r_mbytes_per_sec": 0, 00:09:06.399 "w_mbytes_per_sec": 0 00:09:06.399 }, 00:09:06.399 "claimed": true, 00:09:06.399 "claim_type": "exclusive_write", 00:09:06.399 "zoned": false, 00:09:06.399 "supported_io_types": { 00:09:06.399 "read": true, 00:09:06.399 "write": true, 00:09:06.399 "unmap": true, 00:09:06.399 "flush": true, 00:09:06.399 "reset": true, 00:09:06.399 "nvme_admin": false, 00:09:06.399 "nvme_io": false, 00:09:06.399 "nvme_io_md": false, 00:09:06.399 "write_zeroes": true, 00:09:06.399 "zcopy": true, 00:09:06.399 "get_zone_info": false, 00:09:06.399 "zone_management": false, 00:09:06.399 "zone_append": false, 00:09:06.399 "compare": false, 00:09:06.399 "compare_and_write": false, 00:09:06.399 "abort": true, 00:09:06.399 "seek_hole": false, 00:09:06.399 "seek_data": false, 00:09:06.399 "copy": true, 00:09:06.399 "nvme_iov_md": false 00:09:06.399 }, 00:09:06.399 "memory_domains": [ 00:09:06.399 { 00:09:06.399 "dma_device_id": "system", 00:09:06.399 "dma_device_type": 1 00:09:06.399 }, 00:09:06.399 { 00:09:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.399 "dma_device_type": 2 00:09:06.399 } 00:09:06.399 ], 00:09:06.399 "driver_specific": {} 00:09:06.399 } 00:09:06.399 ] 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.663 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.663 "name": "Existed_Raid", 00:09:06.663 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:06.663 "strip_size_kb": 64, 00:09:06.663 "state": "configuring", 00:09:06.663 "raid_level": "raid0", 00:09:06.663 "superblock": true, 00:09:06.663 "num_base_bdevs": 4, 00:09:06.663 "num_base_bdevs_discovered": 2, 00:09:06.663 
"num_base_bdevs_operational": 4, 00:09:06.663 "base_bdevs_list": [ 00:09:06.663 { 00:09:06.663 "name": "BaseBdev1", 00:09:06.663 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:06.663 "is_configured": true, 00:09:06.663 "data_offset": 2048, 00:09:06.663 "data_size": 63488 00:09:06.663 }, 00:09:06.663 { 00:09:06.663 "name": "BaseBdev2", 00:09:06.663 "uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 00:09:06.663 "is_configured": true, 00:09:06.663 "data_offset": 2048, 00:09:06.663 "data_size": 63488 00:09:06.663 }, 00:09:06.663 { 00:09:06.663 "name": "BaseBdev3", 00:09:06.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.663 "is_configured": false, 00:09:06.663 "data_offset": 0, 00:09:06.663 "data_size": 0 00:09:06.663 }, 00:09:06.663 { 00:09:06.663 "name": "BaseBdev4", 00:09:06.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.663 "is_configured": false, 00:09:06.663 "data_offset": 0, 00:09:06.663 "data_size": 0 00:09:06.663 } 00:09:06.663 ] 00:09:06.663 }' 00:09:06.663 18:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.663 18:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.935 [2024-12-15 18:40:07.265653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.935 BaseBdev3 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.935 [ 00:09:06.935 { 00:09:06.935 "name": "BaseBdev3", 00:09:06.935 "aliases": [ 00:09:06.935 "1d504fb2-4e65-4637-868a-536fc7504ff2" 00:09:06.935 ], 00:09:06.935 "product_name": "Malloc disk", 00:09:06.935 "block_size": 512, 00:09:06.935 "num_blocks": 65536, 00:09:06.935 "uuid": "1d504fb2-4e65-4637-868a-536fc7504ff2", 00:09:06.935 "assigned_rate_limits": { 00:09:06.935 "rw_ios_per_sec": 0, 00:09:06.935 "rw_mbytes_per_sec": 0, 00:09:06.935 "r_mbytes_per_sec": 0, 00:09:06.935 "w_mbytes_per_sec": 0 00:09:06.935 }, 00:09:06.935 "claimed": true, 00:09:06.935 "claim_type": "exclusive_write", 00:09:06.935 "zoned": false, 00:09:06.935 "supported_io_types": { 
00:09:06.935 "read": true, 00:09:06.935 "write": true, 00:09:06.935 "unmap": true, 00:09:06.935 "flush": true, 00:09:06.935 "reset": true, 00:09:06.935 "nvme_admin": false, 00:09:06.935 "nvme_io": false, 00:09:06.935 "nvme_io_md": false, 00:09:06.935 "write_zeroes": true, 00:09:06.935 "zcopy": true, 00:09:06.935 "get_zone_info": false, 00:09:06.935 "zone_management": false, 00:09:06.935 "zone_append": false, 00:09:06.935 "compare": false, 00:09:06.935 "compare_and_write": false, 00:09:06.935 "abort": true, 00:09:06.935 "seek_hole": false, 00:09:06.935 "seek_data": false, 00:09:06.935 "copy": true, 00:09:06.935 "nvme_iov_md": false 00:09:06.935 }, 00:09:06.935 "memory_domains": [ 00:09:06.935 { 00:09:06.935 "dma_device_id": "system", 00:09:06.935 "dma_device_type": 1 00:09:06.935 }, 00:09:06.935 { 00:09:06.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.935 "dma_device_type": 2 00:09:06.935 } 00:09:06.935 ], 00:09:06.935 "driver_specific": {} 00:09:06.935 } 00:09:06.935 ] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.935 "name": "Existed_Raid", 00:09:06.935 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:06.935 "strip_size_kb": 64, 00:09:06.935 "state": "configuring", 00:09:06.935 "raid_level": "raid0", 00:09:06.935 "superblock": true, 00:09:06.935 "num_base_bdevs": 4, 00:09:06.935 "num_base_bdevs_discovered": 3, 00:09:06.935 "num_base_bdevs_operational": 4, 00:09:06.935 "base_bdevs_list": [ 00:09:06.935 { 00:09:06.935 "name": "BaseBdev1", 00:09:06.935 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:06.935 "is_configured": true, 00:09:06.935 "data_offset": 2048, 00:09:06.935 "data_size": 63488 00:09:06.935 }, 00:09:06.935 { 00:09:06.935 "name": "BaseBdev2", 00:09:06.935 
"uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 00:09:06.935 "is_configured": true, 00:09:06.935 "data_offset": 2048, 00:09:06.935 "data_size": 63488 00:09:06.935 }, 00:09:06.935 { 00:09:06.935 "name": "BaseBdev3", 00:09:06.935 "uuid": "1d504fb2-4e65-4637-868a-536fc7504ff2", 00:09:06.935 "is_configured": true, 00:09:06.935 "data_offset": 2048, 00:09:06.935 "data_size": 63488 00:09:06.935 }, 00:09:06.935 { 00:09:06.935 "name": "BaseBdev4", 00:09:06.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.935 "is_configured": false, 00:09:06.935 "data_offset": 0, 00:09:06.935 "data_size": 0 00:09:06.935 } 00:09:06.935 ] 00:09:06.935 }' 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.935 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.536 [2024-12-15 18:40:07.764057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:07.536 [2024-12-15 18:40:07.764369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:07.536 [2024-12-15 18:40:07.764423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:07.536 BaseBdev4 00:09:07.536 [2024-12-15 18:40:07.764715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:07.536 [2024-12-15 18:40:07.764875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:07.536 [2024-12-15 18:40:07.764890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:07.536 [2024-12-15 18:40:07.765003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.536 [ 00:09:07.536 { 00:09:07.536 "name": "BaseBdev4", 00:09:07.536 "aliases": [ 00:09:07.536 "a4dbd116-2563-4ffe-b9c3-5e6dc802fc18" 00:09:07.536 ], 00:09:07.536 "product_name": "Malloc disk", 00:09:07.536 "block_size": 512, 00:09:07.536 
"num_blocks": 65536, 00:09:07.536 "uuid": "a4dbd116-2563-4ffe-b9c3-5e6dc802fc18", 00:09:07.536 "assigned_rate_limits": { 00:09:07.536 "rw_ios_per_sec": 0, 00:09:07.536 "rw_mbytes_per_sec": 0, 00:09:07.536 "r_mbytes_per_sec": 0, 00:09:07.536 "w_mbytes_per_sec": 0 00:09:07.536 }, 00:09:07.536 "claimed": true, 00:09:07.536 "claim_type": "exclusive_write", 00:09:07.536 "zoned": false, 00:09:07.536 "supported_io_types": { 00:09:07.536 "read": true, 00:09:07.536 "write": true, 00:09:07.536 "unmap": true, 00:09:07.536 "flush": true, 00:09:07.536 "reset": true, 00:09:07.536 "nvme_admin": false, 00:09:07.536 "nvme_io": false, 00:09:07.536 "nvme_io_md": false, 00:09:07.536 "write_zeroes": true, 00:09:07.536 "zcopy": true, 00:09:07.536 "get_zone_info": false, 00:09:07.536 "zone_management": false, 00:09:07.536 "zone_append": false, 00:09:07.536 "compare": false, 00:09:07.536 "compare_and_write": false, 00:09:07.536 "abort": true, 00:09:07.536 "seek_hole": false, 00:09:07.536 "seek_data": false, 00:09:07.536 "copy": true, 00:09:07.536 "nvme_iov_md": false 00:09:07.536 }, 00:09:07.536 "memory_domains": [ 00:09:07.536 { 00:09:07.536 "dma_device_id": "system", 00:09:07.536 "dma_device_type": 1 00:09:07.536 }, 00:09:07.536 { 00:09:07.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.536 "dma_device_type": 2 00:09:07.536 } 00:09:07.536 ], 00:09:07.536 "driver_specific": {} 00:09:07.536 } 00:09:07.536 ] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.536 "name": "Existed_Raid", 00:09:07.536 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:07.536 "strip_size_kb": 64, 00:09:07.536 "state": "online", 00:09:07.536 "raid_level": "raid0", 00:09:07.536 "superblock": true, 00:09:07.536 "num_base_bdevs": 4, 
00:09:07.536 "num_base_bdevs_discovered": 4, 00:09:07.536 "num_base_bdevs_operational": 4, 00:09:07.536 "base_bdevs_list": [ 00:09:07.536 { 00:09:07.536 "name": "BaseBdev1", 00:09:07.536 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:07.536 "is_configured": true, 00:09:07.536 "data_offset": 2048, 00:09:07.536 "data_size": 63488 00:09:07.536 }, 00:09:07.536 { 00:09:07.536 "name": "BaseBdev2", 00:09:07.536 "uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 00:09:07.536 "is_configured": true, 00:09:07.536 "data_offset": 2048, 00:09:07.536 "data_size": 63488 00:09:07.536 }, 00:09:07.536 { 00:09:07.536 "name": "BaseBdev3", 00:09:07.536 "uuid": "1d504fb2-4e65-4637-868a-536fc7504ff2", 00:09:07.536 "is_configured": true, 00:09:07.536 "data_offset": 2048, 00:09:07.536 "data_size": 63488 00:09:07.536 }, 00:09:07.536 { 00:09:07.536 "name": "BaseBdev4", 00:09:07.536 "uuid": "a4dbd116-2563-4ffe-b9c3-5e6dc802fc18", 00:09:07.536 "is_configured": true, 00:09:07.536 "data_offset": 2048, 00:09:07.536 "data_size": 63488 00:09:07.536 } 00:09:07.536 ] 00:09:07.536 }' 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.536 18:40:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.107 
18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.107 [2024-12-15 18:40:08.271592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.107 "name": "Existed_Raid", 00:09:08.107 "aliases": [ 00:09:08.107 "93571515-c652-4f67-b298-8f5b2928a223" 00:09:08.107 ], 00:09:08.107 "product_name": "Raid Volume", 00:09:08.107 "block_size": 512, 00:09:08.107 "num_blocks": 253952, 00:09:08.107 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:08.107 "assigned_rate_limits": { 00:09:08.107 "rw_ios_per_sec": 0, 00:09:08.107 "rw_mbytes_per_sec": 0, 00:09:08.107 "r_mbytes_per_sec": 0, 00:09:08.107 "w_mbytes_per_sec": 0 00:09:08.107 }, 00:09:08.107 "claimed": false, 00:09:08.107 "zoned": false, 00:09:08.107 "supported_io_types": { 00:09:08.107 "read": true, 00:09:08.107 "write": true, 00:09:08.107 "unmap": true, 00:09:08.107 "flush": true, 00:09:08.107 "reset": true, 00:09:08.107 "nvme_admin": false, 00:09:08.107 "nvme_io": false, 00:09:08.107 "nvme_io_md": false, 00:09:08.107 "write_zeroes": true, 00:09:08.107 "zcopy": false, 00:09:08.107 "get_zone_info": false, 00:09:08.107 "zone_management": false, 00:09:08.107 "zone_append": false, 00:09:08.107 "compare": false, 00:09:08.107 "compare_and_write": false, 00:09:08.107 "abort": false, 00:09:08.107 "seek_hole": false, 00:09:08.107 "seek_data": false, 00:09:08.107 "copy": false, 00:09:08.107 
"nvme_iov_md": false 00:09:08.107 }, 00:09:08.107 "memory_domains": [ 00:09:08.107 { 00:09:08.107 "dma_device_id": "system", 00:09:08.107 "dma_device_type": 1 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.107 "dma_device_type": 2 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "system", 00:09:08.107 "dma_device_type": 1 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.107 "dma_device_type": 2 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "system", 00:09:08.107 "dma_device_type": 1 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.107 "dma_device_type": 2 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "system", 00:09:08.107 "dma_device_type": 1 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.107 "dma_device_type": 2 00:09:08.107 } 00:09:08.107 ], 00:09:08.107 "driver_specific": { 00:09:08.107 "raid": { 00:09:08.107 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:08.107 "strip_size_kb": 64, 00:09:08.107 "state": "online", 00:09:08.107 "raid_level": "raid0", 00:09:08.107 "superblock": true, 00:09:08.107 "num_base_bdevs": 4, 00:09:08.107 "num_base_bdevs_discovered": 4, 00:09:08.107 "num_base_bdevs_operational": 4, 00:09:08.107 "base_bdevs_list": [ 00:09:08.107 { 00:09:08.107 "name": "BaseBdev1", 00:09:08.107 "uuid": "a93b7149-ae94-4435-87cc-1d9418e7a871", 00:09:08.107 "is_configured": true, 00:09:08.107 "data_offset": 2048, 00:09:08.107 "data_size": 63488 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "name": "BaseBdev2", 00:09:08.107 "uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 00:09:08.107 "is_configured": true, 00:09:08.107 "data_offset": 2048, 00:09:08.107 "data_size": 63488 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "name": "BaseBdev3", 00:09:08.107 "uuid": "1d504fb2-4e65-4637-868a-536fc7504ff2", 00:09:08.107 "is_configured": true, 
00:09:08.107 "data_offset": 2048, 00:09:08.107 "data_size": 63488 00:09:08.107 }, 00:09:08.107 { 00:09:08.107 "name": "BaseBdev4", 00:09:08.107 "uuid": "a4dbd116-2563-4ffe-b9c3-5e6dc802fc18", 00:09:08.107 "is_configured": true, 00:09:08.107 "data_offset": 2048, 00:09:08.107 "data_size": 63488 00:09:08.107 } 00:09:08.107 ] 00:09:08.107 } 00:09:08.107 } 00:09:08.107 }' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:08.107 BaseBdev2 00:09:08.107 BaseBdev3 00:09:08.107 BaseBdev4' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.107 18:40:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.107 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.108 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.367 [2024-12-15 18:40:08.602780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.367 [2024-12-15 18:40:08.602884] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.367 [2024-12-15 18:40:08.602967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
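The trace above calls `verify_raid_bdev_state Existed_Raid offline raid0 64 3`, which filters the `bdev_raid_get_bdevs` JSON with jq and compares each field against the expected values. A minimal Python sketch of that comparison, using the field names and values from the JSON in this log (the helper is illustrative, not the actual bash implementation in `bdev_raid.sh`):

```python
import json

# JSON shape as emitted by SPDK's bdev_raid_get_bdevs RPC; values taken
# from the Existed_Raid info printed in this log (non-essential fields omitted)
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "offline",
  "raid_level": "raid0",
  "num_base_bdevs": 4,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Mirrors the shell helper's checks: compare state, level, strip size,
    # and operational count, then count discovered base bdevs the way the
    # log's jq filter does (select(.is_configured == true))
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return sum(1 for b in info["base_bdevs_list"] if b["is_configured"])

# After deleting BaseBdev1, raid0 has no redundancy, so the expected
# state is offline with 3 of 4 base bdevs still configured
print(verify_raid_bdev_state(raid_bdev_info, "offline", "raid0", 64, 3))  # → 3
```

The same pattern repeats later in the log with `expected_state=configuring` while the array is being rebuilt from new malloc bdevs.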
00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.367 "name": "Existed_Raid", 00:09:08.367 "uuid": "93571515-c652-4f67-b298-8f5b2928a223", 00:09:08.367 "strip_size_kb": 64, 00:09:08.367 "state": "offline", 00:09:08.367 "raid_level": "raid0", 00:09:08.367 "superblock": true, 00:09:08.367 "num_base_bdevs": 4, 00:09:08.367 "num_base_bdevs_discovered": 3, 00:09:08.367 "num_base_bdevs_operational": 3, 00:09:08.367 "base_bdevs_list": [ 00:09:08.367 { 00:09:08.367 "name": null, 00:09:08.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.367 "is_configured": false, 00:09:08.367 "data_offset": 0, 00:09:08.367 "data_size": 63488 00:09:08.367 }, 00:09:08.367 { 00:09:08.367 "name": "BaseBdev2", 00:09:08.367 "uuid": "26b9c027-3007-49d4-8036-5425a4ec43b5", 00:09:08.367 "is_configured": true, 00:09:08.367 "data_offset": 2048, 00:09:08.367 "data_size": 63488 00:09:08.367 }, 00:09:08.367 { 00:09:08.367 "name": "BaseBdev3", 00:09:08.367 "uuid": "1d504fb2-4e65-4637-868a-536fc7504ff2", 00:09:08.367 "is_configured": true, 00:09:08.367 "data_offset": 2048, 00:09:08.367 "data_size": 63488 00:09:08.367 }, 00:09:08.367 { 00:09:08.367 "name": "BaseBdev4", 00:09:08.367 "uuid": "a4dbd116-2563-4ffe-b9c3-5e6dc802fc18", 00:09:08.367 "is_configured": true, 00:09:08.367 "data_offset": 2048, 00:09:08.367 "data_size": 63488 00:09:08.367 } 00:09:08.367 ] 00:09:08.367 }' 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.367 18:40:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.627 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:08.627 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.627 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.627 
18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.627 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.627 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 [2024-12-15 18:40:09.101545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 [2024-12-15 18:40:09.173058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:08.887 18:40:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 [2024-12-15 18:40:09.244301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:08.887 [2024-12-15 18:40:09.244405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.887 BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.887 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.148 [ 00:09:09.148 { 00:09:09.148 "name": "BaseBdev2", 00:09:09.148 "aliases": [ 00:09:09.148 
"608a5176-0a36-4b1c-b4e7-9a9eeea2fce5" 00:09:09.148 ], 00:09:09.148 "product_name": "Malloc disk", 00:09:09.148 "block_size": 512, 00:09:09.148 "num_blocks": 65536, 00:09:09.148 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:09.148 "assigned_rate_limits": { 00:09:09.148 "rw_ios_per_sec": 0, 00:09:09.148 "rw_mbytes_per_sec": 0, 00:09:09.148 "r_mbytes_per_sec": 0, 00:09:09.148 "w_mbytes_per_sec": 0 00:09:09.148 }, 00:09:09.148 "claimed": false, 00:09:09.148 "zoned": false, 00:09:09.148 "supported_io_types": { 00:09:09.148 "read": true, 00:09:09.148 "write": true, 00:09:09.148 "unmap": true, 00:09:09.148 "flush": true, 00:09:09.148 "reset": true, 00:09:09.148 "nvme_admin": false, 00:09:09.148 "nvme_io": false, 00:09:09.148 "nvme_io_md": false, 00:09:09.148 "write_zeroes": true, 00:09:09.148 "zcopy": true, 00:09:09.148 "get_zone_info": false, 00:09:09.148 "zone_management": false, 00:09:09.148 "zone_append": false, 00:09:09.148 "compare": false, 00:09:09.148 "compare_and_write": false, 00:09:09.148 "abort": true, 00:09:09.148 "seek_hole": false, 00:09:09.148 "seek_data": false, 00:09:09.148 "copy": true, 00:09:09.148 "nvme_iov_md": false 00:09:09.148 }, 00:09:09.148 "memory_domains": [ 00:09:09.148 { 00:09:09.148 "dma_device_id": "system", 00:09:09.148 "dma_device_type": 1 00:09:09.148 }, 00:09:09.148 { 00:09:09.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.148 "dma_device_type": 2 00:09:09.148 } 00:09:09.148 ], 00:09:09.148 "driver_specific": {} 00:09:09.148 } 00:09:09.148 ] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.148 18:40:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.148 BaseBdev3 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.148 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.148 [ 00:09:09.148 { 
00:09:09.148 "name": "BaseBdev3", 00:09:09.148 "aliases": [ 00:09:09.148 "aac88605-58fa-460b-9e13-59d958bc86f4" 00:09:09.148 ], 00:09:09.148 "product_name": "Malloc disk", 00:09:09.148 "block_size": 512, 00:09:09.148 "num_blocks": 65536, 00:09:09.148 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:09.148 "assigned_rate_limits": { 00:09:09.148 "rw_ios_per_sec": 0, 00:09:09.148 "rw_mbytes_per_sec": 0, 00:09:09.148 "r_mbytes_per_sec": 0, 00:09:09.148 "w_mbytes_per_sec": 0 00:09:09.148 }, 00:09:09.148 "claimed": false, 00:09:09.148 "zoned": false, 00:09:09.148 "supported_io_types": { 00:09:09.148 "read": true, 00:09:09.148 "write": true, 00:09:09.148 "unmap": true, 00:09:09.148 "flush": true, 00:09:09.148 "reset": true, 00:09:09.148 "nvme_admin": false, 00:09:09.148 "nvme_io": false, 00:09:09.148 "nvme_io_md": false, 00:09:09.148 "write_zeroes": true, 00:09:09.148 "zcopy": true, 00:09:09.149 "get_zone_info": false, 00:09:09.149 "zone_management": false, 00:09:09.149 "zone_append": false, 00:09:09.149 "compare": false, 00:09:09.149 "compare_and_write": false, 00:09:09.149 "abort": true, 00:09:09.149 "seek_hole": false, 00:09:09.149 "seek_data": false, 00:09:09.149 "copy": true, 00:09:09.149 "nvme_iov_md": false 00:09:09.149 }, 00:09:09.149 "memory_domains": [ 00:09:09.149 { 00:09:09.149 "dma_device_id": "system", 00:09:09.149 "dma_device_type": 1 00:09:09.149 }, 00:09:09.149 { 00:09:09.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.149 "dma_device_type": 2 00:09:09.149 } 00:09:09.149 ], 00:09:09.149 "driver_specific": {} 00:09:09.149 } 00:09:09.149 ] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 BaseBdev4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:09.149 [ 00:09:09.149 { 00:09:09.149 "name": "BaseBdev4", 00:09:09.149 "aliases": [ 00:09:09.149 "1a26e20f-3e62-430e-88ec-be925848775e" 00:09:09.149 ], 00:09:09.149 "product_name": "Malloc disk", 00:09:09.149 "block_size": 512, 00:09:09.149 "num_blocks": 65536, 00:09:09.149 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:09.149 "assigned_rate_limits": { 00:09:09.149 "rw_ios_per_sec": 0, 00:09:09.149 "rw_mbytes_per_sec": 0, 00:09:09.149 "r_mbytes_per_sec": 0, 00:09:09.149 "w_mbytes_per_sec": 0 00:09:09.149 }, 00:09:09.149 "claimed": false, 00:09:09.149 "zoned": false, 00:09:09.149 "supported_io_types": { 00:09:09.149 "read": true, 00:09:09.149 "write": true, 00:09:09.149 "unmap": true, 00:09:09.149 "flush": true, 00:09:09.149 "reset": true, 00:09:09.149 "nvme_admin": false, 00:09:09.149 "nvme_io": false, 00:09:09.149 "nvme_io_md": false, 00:09:09.149 "write_zeroes": true, 00:09:09.149 "zcopy": true, 00:09:09.149 "get_zone_info": false, 00:09:09.149 "zone_management": false, 00:09:09.149 "zone_append": false, 00:09:09.149 "compare": false, 00:09:09.149 "compare_and_write": false, 00:09:09.149 "abort": true, 00:09:09.149 "seek_hole": false, 00:09:09.149 "seek_data": false, 00:09:09.149 "copy": true, 00:09:09.149 "nvme_iov_md": false 00:09:09.149 }, 00:09:09.149 "memory_domains": [ 00:09:09.149 { 00:09:09.149 "dma_device_id": "system", 00:09:09.149 "dma_device_type": 1 00:09:09.149 }, 00:09:09.149 { 00:09:09.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.149 "dma_device_type": 2 00:09:09.149 } 00:09:09.149 ], 00:09:09.149 "driver_specific": {} 00:09:09.149 } 00:09:09.149 ] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:09.149 18:40:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 [2024-12-15 18:40:09.469612] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.149 [2024-12-15 18:40:09.469723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.149 [2024-12-15 18:40:09.469784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.149 [2024-12-15 18:40:09.471712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.149 [2024-12-15 18:40:09.471814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.149 "name": "Existed_Raid", 00:09:09.149 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:09.149 "strip_size_kb": 64, 00:09:09.149 "state": "configuring", 00:09:09.149 "raid_level": "raid0", 00:09:09.149 "superblock": true, 00:09:09.149 "num_base_bdevs": 4, 00:09:09.149 "num_base_bdevs_discovered": 3, 00:09:09.149 "num_base_bdevs_operational": 4, 00:09:09.149 "base_bdevs_list": [ 00:09:09.149 { 00:09:09.149 "name": "BaseBdev1", 00:09:09.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.149 "is_configured": false, 00:09:09.149 "data_offset": 0, 00:09:09.149 "data_size": 0 00:09:09.149 }, 00:09:09.149 { 00:09:09.149 "name": "BaseBdev2", 00:09:09.149 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:09.149 "is_configured": true, 00:09:09.149 "data_offset": 2048, 00:09:09.149 "data_size": 63488 
00:09:09.149 }, 00:09:09.149 { 00:09:09.149 "name": "BaseBdev3", 00:09:09.149 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:09.149 "is_configured": true, 00:09:09.149 "data_offset": 2048, 00:09:09.149 "data_size": 63488 00:09:09.149 }, 00:09:09.149 { 00:09:09.149 "name": "BaseBdev4", 00:09:09.149 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:09.149 "is_configured": true, 00:09:09.149 "data_offset": 2048, 00:09:09.149 "data_size": 63488 00:09:09.149 } 00:09:09.149 ] 00:09:09.149 }' 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.149 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.718 [2024-12-15 18:40:09.920827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.718 "name": "Existed_Raid", 00:09:09.718 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:09.718 "strip_size_kb": 64, 00:09:09.718 "state": "configuring", 00:09:09.718 "raid_level": "raid0", 00:09:09.718 "superblock": true, 00:09:09.718 "num_base_bdevs": 4, 00:09:09.718 "num_base_bdevs_discovered": 2, 00:09:09.718 "num_base_bdevs_operational": 4, 00:09:09.718 "base_bdevs_list": [ 00:09:09.718 { 00:09:09.718 "name": "BaseBdev1", 00:09:09.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.718 "is_configured": false, 00:09:09.718 "data_offset": 0, 00:09:09.718 "data_size": 0 00:09:09.718 }, 00:09:09.718 { 00:09:09.718 "name": null, 00:09:09.718 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:09.718 "is_configured": false, 00:09:09.718 "data_offset": 0, 00:09:09.718 "data_size": 63488 
00:09:09.718 }, 00:09:09.718 { 00:09:09.718 "name": "BaseBdev3", 00:09:09.718 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:09.718 "is_configured": true, 00:09:09.718 "data_offset": 2048, 00:09:09.718 "data_size": 63488 00:09:09.718 }, 00:09:09.718 { 00:09:09.718 "name": "BaseBdev4", 00:09:09.718 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:09.718 "is_configured": true, 00:09:09.718 "data_offset": 2048, 00:09:09.718 "data_size": 63488 00:09:09.718 } 00:09:09.718 ] 00:09:09.718 }' 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.718 18:40:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.977 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.977 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.977 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.977 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:09.977 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.236 [2024-12-15 18:40:10.430993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.236 BaseBdev1 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.236 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.236 [ 00:09:10.236 { 00:09:10.236 "name": "BaseBdev1", 00:09:10.236 "aliases": [ 00:09:10.236 "d160bc8b-cde0-4108-bfdf-89deac697c88" 00:09:10.236 ], 00:09:10.236 "product_name": "Malloc disk", 00:09:10.236 "block_size": 512, 00:09:10.236 "num_blocks": 65536, 00:09:10.236 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:10.236 "assigned_rate_limits": { 00:09:10.236 "rw_ios_per_sec": 0, 00:09:10.236 "rw_mbytes_per_sec": 0, 
00:09:10.236 "r_mbytes_per_sec": 0, 00:09:10.236 "w_mbytes_per_sec": 0 00:09:10.236 }, 00:09:10.236 "claimed": true, 00:09:10.236 "claim_type": "exclusive_write", 00:09:10.236 "zoned": false, 00:09:10.236 "supported_io_types": { 00:09:10.236 "read": true, 00:09:10.236 "write": true, 00:09:10.236 "unmap": true, 00:09:10.236 "flush": true, 00:09:10.236 "reset": true, 00:09:10.236 "nvme_admin": false, 00:09:10.236 "nvme_io": false, 00:09:10.236 "nvme_io_md": false, 00:09:10.236 "write_zeroes": true, 00:09:10.237 "zcopy": true, 00:09:10.237 "get_zone_info": false, 00:09:10.237 "zone_management": false, 00:09:10.237 "zone_append": false, 00:09:10.237 "compare": false, 00:09:10.237 "compare_and_write": false, 00:09:10.237 "abort": true, 00:09:10.237 "seek_hole": false, 00:09:10.237 "seek_data": false, 00:09:10.237 "copy": true, 00:09:10.237 "nvme_iov_md": false 00:09:10.237 }, 00:09:10.237 "memory_domains": [ 00:09:10.237 { 00:09:10.237 "dma_device_id": "system", 00:09:10.237 "dma_device_type": 1 00:09:10.237 }, 00:09:10.237 { 00:09:10.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.237 "dma_device_type": 2 00:09:10.237 } 00:09:10.237 ], 00:09:10.237 "driver_specific": {} 00:09:10.237 } 00:09:10.237 ] 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.237 18:40:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.237 "name": "Existed_Raid", 00:09:10.237 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:10.237 "strip_size_kb": 64, 00:09:10.237 "state": "configuring", 00:09:10.237 "raid_level": "raid0", 00:09:10.237 "superblock": true, 00:09:10.237 "num_base_bdevs": 4, 00:09:10.237 "num_base_bdevs_discovered": 3, 00:09:10.237 "num_base_bdevs_operational": 4, 00:09:10.237 "base_bdevs_list": [ 00:09:10.237 { 00:09:10.237 "name": "BaseBdev1", 00:09:10.237 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:10.237 "is_configured": true, 00:09:10.237 "data_offset": 2048, 00:09:10.237 "data_size": 63488 00:09:10.237 }, 00:09:10.237 { 
00:09:10.237 "name": null, 00:09:10.237 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:10.237 "is_configured": false, 00:09:10.237 "data_offset": 0, 00:09:10.237 "data_size": 63488 00:09:10.237 }, 00:09:10.237 { 00:09:10.237 "name": "BaseBdev3", 00:09:10.237 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:10.237 "is_configured": true, 00:09:10.237 "data_offset": 2048, 00:09:10.237 "data_size": 63488 00:09:10.237 }, 00:09:10.237 { 00:09:10.237 "name": "BaseBdev4", 00:09:10.237 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:10.237 "is_configured": true, 00:09:10.237 "data_offset": 2048, 00:09:10.237 "data_size": 63488 00:09:10.237 } 00:09:10.237 ] 00:09:10.237 }' 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.237 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.496 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.496 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.496 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.496 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.496 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.756 [2024-12-15 18:40:10.950178] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.756 18:40:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.756 18:40:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.756 "name": "Existed_Raid", 00:09:10.756 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:10.756 "strip_size_kb": 64, 00:09:10.756 "state": "configuring", 00:09:10.756 "raid_level": "raid0", 00:09:10.756 "superblock": true, 00:09:10.756 "num_base_bdevs": 4, 00:09:10.756 "num_base_bdevs_discovered": 2, 00:09:10.756 "num_base_bdevs_operational": 4, 00:09:10.756 "base_bdevs_list": [ 00:09:10.756 { 00:09:10.757 "name": "BaseBdev1", 00:09:10.757 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:10.757 "is_configured": true, 00:09:10.757 "data_offset": 2048, 00:09:10.757 "data_size": 63488 00:09:10.757 }, 00:09:10.757 { 00:09:10.757 "name": null, 00:09:10.757 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:10.757 "is_configured": false, 00:09:10.757 "data_offset": 0, 00:09:10.757 "data_size": 63488 00:09:10.757 }, 00:09:10.757 { 00:09:10.757 "name": null, 00:09:10.757 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:10.757 "is_configured": false, 00:09:10.757 "data_offset": 0, 00:09:10.757 "data_size": 63488 00:09:10.757 }, 00:09:10.757 { 00:09:10.757 "name": "BaseBdev4", 00:09:10.757 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:10.757 "is_configured": true, 00:09:10.757 "data_offset": 2048, 00:09:10.757 "data_size": 63488 00:09:10.757 } 00:09:10.757 ] 00:09:10.757 }' 00:09:10.757 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.757 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.017 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.017 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 18:40:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.277 [2024-12-15 18:40:11.505255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.277 "name": "Existed_Raid", 00:09:11.277 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:11.277 "strip_size_kb": 64, 00:09:11.277 "state": "configuring", 00:09:11.277 "raid_level": "raid0", 00:09:11.277 "superblock": true, 00:09:11.277 "num_base_bdevs": 4, 00:09:11.277 "num_base_bdevs_discovered": 3, 00:09:11.277 "num_base_bdevs_operational": 4, 00:09:11.277 "base_bdevs_list": [ 00:09:11.277 { 00:09:11.277 "name": "BaseBdev1", 00:09:11.277 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:11.277 "is_configured": true, 00:09:11.277 "data_offset": 2048, 00:09:11.277 "data_size": 63488 00:09:11.277 }, 00:09:11.277 { 00:09:11.277 "name": null, 00:09:11.277 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:11.277 "is_configured": false, 00:09:11.277 "data_offset": 0, 00:09:11.277 "data_size": 63488 00:09:11.277 }, 00:09:11.277 { 00:09:11.277 "name": "BaseBdev3", 00:09:11.277 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:11.277 "is_configured": true, 00:09:11.277 "data_offset": 2048, 00:09:11.277 "data_size": 63488 00:09:11.277 }, 00:09:11.277 { 00:09:11.277 "name": "BaseBdev4", 00:09:11.277 "uuid": 
"1a26e20f-3e62-430e-88ec-be925848775e", 00:09:11.277 "is_configured": true, 00:09:11.277 "data_offset": 2048, 00:09:11.277 "data_size": 63488 00:09:11.277 } 00:09:11.277 ] 00:09:11.277 }' 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.277 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.846 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.846 18:40:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:11.846 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 18:40:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 [2024-12-15 18:40:12.028520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.846 "name": "Existed_Raid", 00:09:11.846 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:11.846 "strip_size_kb": 64, 00:09:11.846 "state": "configuring", 00:09:11.846 "raid_level": "raid0", 00:09:11.846 "superblock": true, 00:09:11.846 "num_base_bdevs": 4, 00:09:11.846 "num_base_bdevs_discovered": 2, 00:09:11.846 "num_base_bdevs_operational": 4, 00:09:11.846 "base_bdevs_list": [ 00:09:11.846 { 00:09:11.846 "name": null, 00:09:11.846 
"uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:11.846 "is_configured": false, 00:09:11.846 "data_offset": 0, 00:09:11.846 "data_size": 63488 00:09:11.846 }, 00:09:11.846 { 00:09:11.846 "name": null, 00:09:11.846 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:11.846 "is_configured": false, 00:09:11.846 "data_offset": 0, 00:09:11.846 "data_size": 63488 00:09:11.846 }, 00:09:11.846 { 00:09:11.846 "name": "BaseBdev3", 00:09:11.846 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:11.846 "is_configured": true, 00:09:11.846 "data_offset": 2048, 00:09:11.846 "data_size": 63488 00:09:11.846 }, 00:09:11.846 { 00:09:11.846 "name": "BaseBdev4", 00:09:11.846 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:11.846 "is_configured": true, 00:09:11.846 "data_offset": 2048, 00:09:11.846 "data_size": 63488 00:09:11.846 } 00:09:11.846 ] 00:09:11.846 }' 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.846 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.106 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.106 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.106 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.106 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.106 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.365 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:12.365 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:12.365 18:40:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.365 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.365 [2024-12-15 18:40:12.562544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.366 18:40:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.366 "name": "Existed_Raid", 00:09:12.366 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:12.366 "strip_size_kb": 64, 00:09:12.366 "state": "configuring", 00:09:12.366 "raid_level": "raid0", 00:09:12.366 "superblock": true, 00:09:12.366 "num_base_bdevs": 4, 00:09:12.366 "num_base_bdevs_discovered": 3, 00:09:12.366 "num_base_bdevs_operational": 4, 00:09:12.366 "base_bdevs_list": [ 00:09:12.366 { 00:09:12.366 "name": null, 00:09:12.366 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:12.366 "is_configured": false, 00:09:12.366 "data_offset": 0, 00:09:12.366 "data_size": 63488 00:09:12.366 }, 00:09:12.366 { 00:09:12.366 "name": "BaseBdev2", 00:09:12.366 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:12.366 "is_configured": true, 00:09:12.366 "data_offset": 2048, 00:09:12.366 "data_size": 63488 00:09:12.366 }, 00:09:12.366 { 00:09:12.366 "name": "BaseBdev3", 00:09:12.366 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:12.366 "is_configured": true, 00:09:12.366 "data_offset": 2048, 00:09:12.366 "data_size": 63488 00:09:12.366 }, 00:09:12.366 { 00:09:12.366 "name": "BaseBdev4", 00:09:12.366 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:12.366 "is_configured": true, 00:09:12.366 "data_offset": 2048, 00:09:12.366 "data_size": 63488 00:09:12.366 } 00:09:12.366 ] 00:09:12.366 }' 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.366 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.625 18:40:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.625 18:40:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.625 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.625 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.625 18:40:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d160bc8b-cde0-4108-bfdf-89deac697c88 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.625 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.885 [2024-12-15 18:40:13.073010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:12.885 [2024-12-15 18:40:13.073315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:12.885 [2024-12-15 18:40:13.073367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:12.885 NewBaseBdev 00:09:12.885 [2024-12-15 18:40:13.073698] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.885 [2024-12-15 18:40:13.073848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:12.885 [2024-12-15 18:40:13.073862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:12.885 [2024-12-15 18:40:13.073961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:12.885 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.885 
18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.885 [ 00:09:12.885 { 00:09:12.885 "name": "NewBaseBdev", 00:09:12.885 "aliases": [ 00:09:12.885 "d160bc8b-cde0-4108-bfdf-89deac697c88" 00:09:12.885 ], 00:09:12.885 "product_name": "Malloc disk", 00:09:12.885 "block_size": 512, 00:09:12.885 "num_blocks": 65536, 00:09:12.885 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:12.885 "assigned_rate_limits": { 00:09:12.885 "rw_ios_per_sec": 0, 00:09:12.885 "rw_mbytes_per_sec": 0, 00:09:12.885 "r_mbytes_per_sec": 0, 00:09:12.885 "w_mbytes_per_sec": 0 00:09:12.885 }, 00:09:12.885 "claimed": true, 00:09:12.885 "claim_type": "exclusive_write", 00:09:12.885 "zoned": false, 00:09:12.885 "supported_io_types": { 00:09:12.885 "read": true, 00:09:12.885 "write": true, 00:09:12.885 "unmap": true, 00:09:12.885 "flush": true, 00:09:12.885 "reset": true, 00:09:12.885 "nvme_admin": false, 00:09:12.885 "nvme_io": false, 00:09:12.885 "nvme_io_md": false, 00:09:12.885 "write_zeroes": true, 00:09:12.885 "zcopy": true, 00:09:12.885 "get_zone_info": false, 00:09:12.885 "zone_management": false, 00:09:12.885 "zone_append": false, 00:09:12.885 "compare": false, 00:09:12.885 "compare_and_write": false, 00:09:12.885 "abort": true, 00:09:12.885 "seek_hole": false, 00:09:12.885 "seek_data": false, 00:09:12.885 "copy": true, 00:09:12.885 "nvme_iov_md": false 00:09:12.885 }, 00:09:12.885 "memory_domains": [ 00:09:12.885 { 00:09:12.885 "dma_device_id": "system", 00:09:12.885 "dma_device_type": 1 00:09:12.885 }, 00:09:12.885 { 00:09:12.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.885 "dma_device_type": 2 00:09:12.885 } 00:09:12.886 ], 00:09:12.886 "driver_specific": {} 00:09:12.886 } 00:09:12.886 ] 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:12.886 18:40:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.886 "name": "Existed_Raid", 00:09:12.886 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:12.886 "strip_size_kb": 64, 00:09:12.886 
"state": "online", 00:09:12.886 "raid_level": "raid0", 00:09:12.886 "superblock": true, 00:09:12.886 "num_base_bdevs": 4, 00:09:12.886 "num_base_bdevs_discovered": 4, 00:09:12.886 "num_base_bdevs_operational": 4, 00:09:12.886 "base_bdevs_list": [ 00:09:12.886 { 00:09:12.886 "name": "NewBaseBdev", 00:09:12.886 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:12.886 "is_configured": true, 00:09:12.886 "data_offset": 2048, 00:09:12.886 "data_size": 63488 00:09:12.886 }, 00:09:12.886 { 00:09:12.886 "name": "BaseBdev2", 00:09:12.886 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:12.886 "is_configured": true, 00:09:12.886 "data_offset": 2048, 00:09:12.886 "data_size": 63488 00:09:12.886 }, 00:09:12.886 { 00:09:12.886 "name": "BaseBdev3", 00:09:12.886 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:12.886 "is_configured": true, 00:09:12.886 "data_offset": 2048, 00:09:12.886 "data_size": 63488 00:09:12.886 }, 00:09:12.886 { 00:09:12.886 "name": "BaseBdev4", 00:09:12.886 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:12.886 "is_configured": true, 00:09:12.886 "data_offset": 2048, 00:09:12.886 "data_size": 63488 00:09:12.886 } 00:09:12.886 ] 00:09:12.886 }' 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.886 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.145 
18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.145 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.145 [2024-12-15 18:40:13.584741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.405 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.405 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.405 "name": "Existed_Raid", 00:09:13.405 "aliases": [ 00:09:13.405 "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6" 00:09:13.405 ], 00:09:13.405 "product_name": "Raid Volume", 00:09:13.405 "block_size": 512, 00:09:13.405 "num_blocks": 253952, 00:09:13.405 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:13.405 "assigned_rate_limits": { 00:09:13.405 "rw_ios_per_sec": 0, 00:09:13.405 "rw_mbytes_per_sec": 0, 00:09:13.405 "r_mbytes_per_sec": 0, 00:09:13.405 "w_mbytes_per_sec": 0 00:09:13.405 }, 00:09:13.405 "claimed": false, 00:09:13.405 "zoned": false, 00:09:13.405 "supported_io_types": { 00:09:13.405 "read": true, 00:09:13.405 "write": true, 00:09:13.405 "unmap": true, 00:09:13.405 "flush": true, 00:09:13.405 "reset": true, 00:09:13.405 "nvme_admin": false, 00:09:13.405 "nvme_io": false, 00:09:13.405 "nvme_io_md": false, 00:09:13.405 "write_zeroes": true, 00:09:13.405 "zcopy": false, 00:09:13.405 "get_zone_info": false, 00:09:13.405 "zone_management": false, 00:09:13.405 "zone_append": false, 00:09:13.405 "compare": false, 00:09:13.405 "compare_and_write": false, 00:09:13.405 "abort": 
false, 00:09:13.405 "seek_hole": false, 00:09:13.405 "seek_data": false, 00:09:13.405 "copy": false, 00:09:13.405 "nvme_iov_md": false 00:09:13.405 }, 00:09:13.405 "memory_domains": [ 00:09:13.405 { 00:09:13.405 "dma_device_id": "system", 00:09:13.405 "dma_device_type": 1 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.405 "dma_device_type": 2 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "system", 00:09:13.405 "dma_device_type": 1 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.405 "dma_device_type": 2 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "system", 00:09:13.405 "dma_device_type": 1 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.405 "dma_device_type": 2 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "system", 00:09:13.405 "dma_device_type": 1 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.405 "dma_device_type": 2 00:09:13.405 } 00:09:13.405 ], 00:09:13.405 "driver_specific": { 00:09:13.405 "raid": { 00:09:13.405 "uuid": "d4e1bee9-1a51-4de7-9e5b-f6a419bfc0f6", 00:09:13.405 "strip_size_kb": 64, 00:09:13.405 "state": "online", 00:09:13.405 "raid_level": "raid0", 00:09:13.405 "superblock": true, 00:09:13.405 "num_base_bdevs": 4, 00:09:13.405 "num_base_bdevs_discovered": 4, 00:09:13.405 "num_base_bdevs_operational": 4, 00:09:13.405 "base_bdevs_list": [ 00:09:13.405 { 00:09:13.405 "name": "NewBaseBdev", 00:09:13.405 "uuid": "d160bc8b-cde0-4108-bfdf-89deac697c88", 00:09:13.405 "is_configured": true, 00:09:13.405 "data_offset": 2048, 00:09:13.405 "data_size": 63488 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "name": "BaseBdev2", 00:09:13.405 "uuid": "608a5176-0a36-4b1c-b4e7-9a9eeea2fce5", 00:09:13.405 "is_configured": true, 00:09:13.405 "data_offset": 2048, 00:09:13.405 "data_size": 63488 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 
"name": "BaseBdev3", 00:09:13.405 "uuid": "aac88605-58fa-460b-9e13-59d958bc86f4", 00:09:13.405 "is_configured": true, 00:09:13.405 "data_offset": 2048, 00:09:13.405 "data_size": 63488 00:09:13.405 }, 00:09:13.405 { 00:09:13.405 "name": "BaseBdev4", 00:09:13.405 "uuid": "1a26e20f-3e62-430e-88ec-be925848775e", 00:09:13.405 "is_configured": true, 00:09:13.405 "data_offset": 2048, 00:09:13.405 "data_size": 63488 00:09:13.405 } 00:09:13.405 ] 00:09:13.405 } 00:09:13.405 } 00:09:13.405 }' 00:09:13.405 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.405 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:13.405 BaseBdev2 00:09:13.406 BaseBdev3 00:09:13.406 BaseBdev4' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.406 18:40:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.406 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.669 [2024-12-15 18:40:13.915761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.669 [2024-12-15 18:40:13.915838] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.669 [2024-12-15 18:40:13.915938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.669 [2024-12-15 18:40:13.916023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.669 [2024-12-15 18:40:13.916066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82925 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82925 ']' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82925 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82925 00:09:13.669 killing process with pid 82925 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82925' 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82925 00:09:13.669 [2024-12-15 18:40:13.964831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.669 18:40:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82925 00:09:13.669 [2024-12-15 18:40:14.006500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.933 18:40:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:13.933 00:09:13.933 real 0m9.807s 00:09:13.933 user 0m16.744s 00:09:13.933 sys 0m2.083s 00:09:13.933 18:40:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.933 18:40:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.933 ************************************ 00:09:13.933 END TEST raid_state_function_test_sb 00:09:13.933 ************************************ 00:09:13.933 18:40:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:13.933 18:40:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.933 18:40:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.933 18:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.933 ************************************ 00:09:13.933 START TEST raid_superblock_test 00:09:13.933 ************************************ 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83579 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83579 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83579 ']' 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.933 18:40:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.192 [2024-12-15 18:40:14.394552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:14.192 [2024-12-15 18:40:14.394775] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83579 ] 00:09:14.193 [2024-12-15 18:40:14.564761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.193 [2024-12-15 18:40:14.591935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.452 [2024-12-15 18:40:14.635691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.452 [2024-12-15 18:40:14.635750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:15.021 
18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 malloc1 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 [2024-12-15 18:40:15.243957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:15.021 [2024-12-15 18:40:15.244066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.021 [2024-12-15 18:40:15.244107] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:15.021 [2024-12-15 18:40:15.244158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.021 [2024-12-15 18:40:15.246414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.021 [2024-12-15 18:40:15.246490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:15.021 pt1 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 malloc2 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 [2024-12-15 18:40:15.276718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:15.021 [2024-12-15 18:40:15.276835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.021 [2024-12-15 18:40:15.276885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:15.021 [2024-12-15 18:40:15.276918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.021 [2024-12-15 18:40:15.279184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.021 [2024-12-15 18:40:15.279256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:15.021 
pt2 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 malloc3 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.021 [2024-12-15 18:40:15.305356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:15.021 [2024-12-15 18:40:15.305449] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.021 [2024-12-15 18:40:15.305487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:15.021 [2024-12-15 18:40:15.305518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.021 [2024-12-15 18:40:15.307556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.021 [2024-12-15 18:40:15.307629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:15.021 pt3
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.021 malloc4
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.021 [2024-12-15 18:40:15.344748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:09:15.021 [2024-12-15 18:40:15.344869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.021 [2024-12-15 18:40:15.344913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:15.021 [2024-12-15 18:40:15.344952] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.021 [2024-12-15 18:40:15.347120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.021 [2024-12-15 18:40:15.347190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:09:15.021 pt4
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:09:15.021 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.022 [2024-12-15 18:40:15.356776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:15.022 [2024-12-15 18:40:15.358608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:15.022 [2024-12-15 18:40:15.358724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:15.022 [2024-12-15 18:40:15.358808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:09:15.022 [2024-12-15 18:40:15.358991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:09:15.022 [2024-12-15 18:40:15.359041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:15.022 [2024-12-15 18:40:15.359290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:15.022 [2024-12-15 18:40:15.359465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:09:15.022 [2024-12-15 18:40:15.359506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:09:15.022 [2024-12-15 18:40:15.359647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:15.022 "name": "raid_bdev1",
00:09:15.022 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0",
00:09:15.022 "strip_size_kb": 64,
00:09:15.022 "state": "online",
00:09:15.022 "raid_level": "raid0",
00:09:15.022 "superblock": true,
00:09:15.022 "num_base_bdevs": 4,
00:09:15.022 "num_base_bdevs_discovered": 4,
00:09:15.022 "num_base_bdevs_operational": 4,
00:09:15.022 "base_bdevs_list": [
00:09:15.022 {
00:09:15.022 "name": "pt1",
00:09:15.022 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.022 "is_configured": true,
00:09:15.022 "data_offset": 2048,
00:09:15.022 "data_size": 63488
00:09:15.022 },
00:09:15.022 {
00:09:15.022 "name": "pt2",
00:09:15.022 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.022 "is_configured": true,
00:09:15.022 "data_offset": 2048,
00:09:15.022 "data_size": 63488
00:09:15.022 },
00:09:15.022 {
00:09:15.022 "name": "pt3",
00:09:15.022 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:15.022 "is_configured": true,
00:09:15.022 "data_offset": 2048,
00:09:15.022 "data_size": 63488
00:09:15.022 },
00:09:15.022 {
00:09:15.022 "name": "pt4",
00:09:15.022 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:15.022 "is_configured": true,
00:09:15.022 "data_offset": 2048,
00:09:15.022 "data_size": 63488
00:09:15.022 }
00:09:15.022 ]
00:09:15.022 }'
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.022 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.592 [2024-12-15 18:40:15.824392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:15.592 "name": "raid_bdev1",
00:09:15.592 "aliases": [
00:09:15.592 "336df15f-0b06-481e-b62f-2df6f7133cb0"
00:09:15.592 ],
00:09:15.592 "product_name": "Raid Volume",
00:09:15.592 "block_size": 512,
00:09:15.592 "num_blocks": 253952,
00:09:15.592 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0",
00:09:15.592 "assigned_rate_limits": {
00:09:15.592 "rw_ios_per_sec": 0,
00:09:15.592 "rw_mbytes_per_sec": 0,
00:09:15.592 "r_mbytes_per_sec": 0,
00:09:15.592 "w_mbytes_per_sec": 0
00:09:15.592 },
00:09:15.592 "claimed": false,
00:09:15.592 "zoned": false,
00:09:15.592 "supported_io_types": {
00:09:15.592 "read": true,
00:09:15.592 "write": true,
00:09:15.592 "unmap": true,
00:09:15.592 "flush": true,
00:09:15.592 "reset": true,
00:09:15.592 "nvme_admin": false,
00:09:15.592 "nvme_io": false,
00:09:15.592 "nvme_io_md": false,
00:09:15.592 "write_zeroes": true,
00:09:15.592 "zcopy": false,
00:09:15.592 "get_zone_info": false,
00:09:15.592 "zone_management": false,
00:09:15.592 "zone_append": false,
00:09:15.592 "compare": false,
00:09:15.592 "compare_and_write": false,
00:09:15.592 "abort": false,
00:09:15.592 "seek_hole": false,
00:09:15.592 "seek_data": false,
00:09:15.592 "copy": false,
00:09:15.592 "nvme_iov_md": false
00:09:15.592 },
00:09:15.592 "memory_domains": [
00:09:15.592 {
00:09:15.592 "dma_device_id": "system",
00:09:15.592 "dma_device_type": 1
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.592 "dma_device_type": 2
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "system",
00:09:15.592 "dma_device_type": 1
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.592 "dma_device_type": 2
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "system",
00:09:15.592 "dma_device_type": 1
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.592 "dma_device_type": 2
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "system",
00:09:15.592 "dma_device_type": 1
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:15.592 "dma_device_type": 2
00:09:15.592 }
00:09:15.592 ],
00:09:15.592 "driver_specific": {
00:09:15.592 "raid": {
00:09:15.592 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0",
00:09:15.592 "strip_size_kb": 64,
00:09:15.592 "state": "online",
00:09:15.592 "raid_level": "raid0",
00:09:15.592 "superblock": true,
00:09:15.592 "num_base_bdevs": 4,
00:09:15.592 "num_base_bdevs_discovered": 4,
00:09:15.592 "num_base_bdevs_operational": 4,
00:09:15.592 "base_bdevs_list": [
00:09:15.592 {
00:09:15.592 "name": "pt1",
00:09:15.592 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.592 "is_configured": true,
00:09:15.592 "data_offset": 2048,
00:09:15.592 "data_size": 63488
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "name": "pt2",
00:09:15.592 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.592 "is_configured": true,
00:09:15.592 "data_offset": 2048,
00:09:15.592 "data_size": 63488
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "name": "pt3",
00:09:15.592 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:15.592 "is_configured": true,
00:09:15.592 "data_offset": 2048,
00:09:15.592 "data_size": 63488
00:09:15.592 },
00:09:15.592 {
00:09:15.592 "name": "pt4",
00:09:15.592 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:15.592 "is_configured": true,
00:09:15.592 "data_offset": 2048,
00:09:15.592 "data_size": 63488
00:09:15.592 }
00:09:15.592 ]
00:09:15.592 }
00:09:15.592 }
00:09:15.592 }'
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:15.592 pt2
00:09:15.592 pt3
00:09:15.592 pt4'
00:09:15.592 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.593 18:40:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.593 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.855 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:15.855 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:15.855 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:15.855 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 [2024-12-15 18:40:16.151764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=336df15f-0b06-481e-b62f-2df6f7133cb0
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 336df15f-0b06-481e-b62f-2df6f7133cb0 ']'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 [2024-12-15 18:40:16.191390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:15.856 [2024-12-15 18:40:16.191457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:15.856 [2024-12-15 18:40:16.191565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:15.856 [2024-12-15 18:40:16.191655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:15.856 [2024-12-15 18:40:16.191745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:15.856 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 [2024-12-15 18:40:16.359152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:16.117 [2024-12-15 18:40:16.361102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:16.117 [2024-12-15 18:40:16.361208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:16.117 [2024-12-15 18:40:16.361257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:09:16.117 [2024-12-15 18:40:16.361329] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:16.117 [2024-12-15 18:40:16.361440] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:16.117 [2024-12-15 18:40:16.361503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:16.117 [2024-12-15 18:40:16.361590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:09:16.117 [2024-12-15 18:40:16.361648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:16.117 [2024-12-15 18:40:16.361698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:09:16.117 request:
00:09:16.117 {
00:09:16.117 "name": "raid_bdev1",
00:09:16.117 "raid_level": "raid0",
00:09:16.117 "base_bdevs": [
00:09:16.117 "malloc1",
00:09:16.117 "malloc2",
00:09:16.117 "malloc3",
00:09:16.117 "malloc4"
00:09:16.117 ],
00:09:16.117 "strip_size_kb": 64,
00:09:16.117 "superblock": false,
00:09:16.117 "method": "bdev_raid_create",
00:09:16.117 "req_id": 1
00:09:16.117 }
00:09:16.117 Got JSON-RPC error response
00:09:16.117 response:
00:09:16.117 {
00:09:16.117 "code": -17,
00:09:16.117 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:16.117 }
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 [2024-12-15 18:40:16.422994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:16.117 [2024-12-15 18:40:16.423107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:16.117 [2024-12-15 18:40:16.423149] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:16.117 [2024-12-15 18:40:16.423178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:16.117 [2024-12-15 18:40:16.425607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:16.117 [2024-12-15 18:40:16.425690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:16.117 [2024-12-15 18:40:16.425817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:16.117 [2024-12-15 18:40:16.425882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:16.117 pt1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.117 "name": "raid_bdev1",
00:09:16.117 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0",
00:09:16.117 "strip_size_kb": 64,
00:09:16.117 "state": "configuring",
00:09:16.117 "raid_level": "raid0",
00:09:16.117 "superblock": true,
00:09:16.117 "num_base_bdevs": 4,
00:09:16.117 "num_base_bdevs_discovered": 1,
00:09:16.117 "num_base_bdevs_operational": 4,
00:09:16.117 "base_bdevs_list": [
00:09:16.117 {
00:09:16.117 "name": "pt1",
00:09:16.117 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:16.117 "is_configured": true,
00:09:16.117 "data_offset": 2048,
00:09:16.117 "data_size": 63488
00:09:16.117 },
00:09:16.117 {
00:09:16.117 "name": null,
00:09:16.117 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.117 "is_configured": false,
00:09:16.117 "data_offset": 2048,
00:09:16.117 "data_size": 63488
00:09:16.117 },
00:09:16.117 {
00:09:16.117 "name": null,
00:09:16.117 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:16.117 "is_configured": false,
00:09:16.117 "data_offset": 2048,
00:09:16.117 "data_size": 63488
00:09:16.117 },
00:09:16.117 {
00:09:16.117 "name": null,
00:09:16.117 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:16.117 "is_configured": false,
00:09:16.117 "data_offset": 2048,
00:09:16.117 "data_size": 63488
00:09:16.117 }
00:09:16.117 ]
00:09:16.117 }'
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.117 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.688 [2024-12-15 18:40:16.870240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:16.688 [2024-12-15 18:40:16.870347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:16.688 [2024-12-15 18:40:16.870389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:09:16.688 [2024-12-15 18:40:16.870418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:16.688 [2024-12-15 18:40:16.870856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:16.688 [2024-12-15 18:40:16.870877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:16.688 [2024-12-15 18:40:16.870963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:16.688 [2024-12-15 18:40:16.870985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:16.688 pt2
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.688 [2024-12-15 18:40:16.882212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.688 "name": "raid_bdev1",
00:09:16.688 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0",
00:09:16.688 "strip_size_kb": 64,
00:09:16.688 "state": "configuring",
00:09:16.688 "raid_level": "raid0",
00:09:16.688 "superblock": true,
00:09:16.688 "num_base_bdevs": 4,
00:09:16.688 "num_base_bdevs_discovered": 1,
00:09:16.688 "num_base_bdevs_operational": 4,
00:09:16.688 "base_bdevs_list": [
00:09:16.688 {
00:09:16.688 "name": "pt1",
00:09:16.688 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:16.688 "is_configured": true,
00:09:16.688 "data_offset": 2048,
00:09:16.688 "data_size": 63488
00:09:16.688 },
00:09:16.688 {
00:09:16.688 "name": null,
00:09:16.688 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.688 "is_configured": false,
00:09:16.688 "data_offset": 0,
00:09:16.688 "data_size": 63488
00:09:16.688 },
00:09:16.688 {
00:09:16.688 "name": null,
00:09:16.688 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:16.688 "is_configured": false,
00:09:16.688 "data_offset": 2048,
00:09:16.688 "data_size": 63488
00:09:16.688 },
00:09:16.688 {
00:09:16.688 "name": null,
00:09:16.688 "uuid": "00000000-0000-0000-0000-000000000004",
00:09:16.688 "is_configured": false,
00:09:16.688 "data_offset": 2048,
00:09:16.688 "data_size": 63488
00:09:16.688 }
00:09:16.688 ]
00:09:16.688 }'
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.688 18:40:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.947 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:16.947 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:16.947 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:16.947 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.947 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.948 [2024-12-15 18:40:17.297535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:16.948 [2024-12-15 18:40:17.297668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:16.948 [2024-12-15 18:40:17.297706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:09:16.948 [2024-12-15 18:40:17.297749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:16.948 [2024-12-15 18:40:17.298179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:16.948 [2024-12-15 18:40:17.298238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:16.948 [2024-12-15 18:40:17.298319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:16.948 [2024-12-15 18:40:17.298343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:16.948 pt2
00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 [2024-12-15 18:40:17.309477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:16.948 [2024-12-15 18:40:17.309563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.948 [2024-12-15 18:40:17.309594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:16.948 [2024-12-15 18:40:17.309623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.948 [2024-12-15 18:40:17.309968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.948 [2024-12-15 18:40:17.310023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:16.948 [2024-12-15 18:40:17.310105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:16.948 [2024-12-15 18:40:17.310158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:16.948 pt3 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 [2024-12-15 18:40:17.321441] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:16.948 [2024-12-15 18:40:17.321523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.948 [2024-12-15 18:40:17.321552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:16.948 [2024-12-15 18:40:17.321583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.948 [2024-12-15 18:40:17.321906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.948 [2024-12-15 18:40:17.321959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:16.948 [2024-12-15 18:40:17.322038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:16.948 [2024-12-15 18:40:17.322084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:16.948 [2024-12-15 18:40:17.322200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:16.948 [2024-12-15 18:40:17.322241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:16.948 [2024-12-15 18:40:17.322480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.948 [2024-12-15 18:40:17.322633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:16.948 [2024-12-15 18:40:17.322672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:16.948 [2024-12-15 18:40:17.322816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.948 pt4 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.948 "name": "raid_bdev1", 00:09:16.948 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0", 00:09:16.948 "strip_size_kb": 64, 00:09:16.948 "state": "online", 00:09:16.948 "raid_level": "raid0", 00:09:16.948 
"superblock": true, 00:09:16.948 "num_base_bdevs": 4, 00:09:16.948 "num_base_bdevs_discovered": 4, 00:09:16.948 "num_base_bdevs_operational": 4, 00:09:16.948 "base_bdevs_list": [ 00:09:16.948 { 00:09:16.948 "name": "pt1", 00:09:16.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:16.948 "is_configured": true, 00:09:16.948 "data_offset": 2048, 00:09:16.948 "data_size": 63488 00:09:16.948 }, 00:09:16.948 { 00:09:16.948 "name": "pt2", 00:09:16.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:16.948 "is_configured": true, 00:09:16.948 "data_offset": 2048, 00:09:16.948 "data_size": 63488 00:09:16.948 }, 00:09:16.948 { 00:09:16.948 "name": "pt3", 00:09:16.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:16.948 "is_configured": true, 00:09:16.948 "data_offset": 2048, 00:09:16.948 "data_size": 63488 00:09:16.948 }, 00:09:16.948 { 00:09:16.948 "name": "pt4", 00:09:16.948 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:16.948 "is_configured": true, 00:09:16.948 "data_offset": 2048, 00:09:16.948 "data_size": 63488 00:09:16.948 } 00:09:16.948 ] 00:09:16.948 }' 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.948 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.518 18:40:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.518 [2024-12-15 18:40:17.745106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.518 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.518 "name": "raid_bdev1", 00:09:17.518 "aliases": [ 00:09:17.518 "336df15f-0b06-481e-b62f-2df6f7133cb0" 00:09:17.518 ], 00:09:17.518 "product_name": "Raid Volume", 00:09:17.518 "block_size": 512, 00:09:17.518 "num_blocks": 253952, 00:09:17.518 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0", 00:09:17.518 "assigned_rate_limits": { 00:09:17.518 "rw_ios_per_sec": 0, 00:09:17.518 "rw_mbytes_per_sec": 0, 00:09:17.518 "r_mbytes_per_sec": 0, 00:09:17.518 "w_mbytes_per_sec": 0 00:09:17.518 }, 00:09:17.518 "claimed": false, 00:09:17.518 "zoned": false, 00:09:17.518 "supported_io_types": { 00:09:17.518 "read": true, 00:09:17.518 "write": true, 00:09:17.518 "unmap": true, 00:09:17.518 "flush": true, 00:09:17.518 "reset": true, 00:09:17.518 "nvme_admin": false, 00:09:17.518 "nvme_io": false, 00:09:17.518 "nvme_io_md": false, 00:09:17.518 "write_zeroes": true, 00:09:17.518 "zcopy": false, 00:09:17.518 "get_zone_info": false, 00:09:17.518 "zone_management": false, 00:09:17.518 "zone_append": false, 00:09:17.518 "compare": false, 00:09:17.518 "compare_and_write": false, 00:09:17.518 "abort": false, 00:09:17.518 "seek_hole": false, 00:09:17.518 "seek_data": false, 00:09:17.518 "copy": false, 00:09:17.518 "nvme_iov_md": false 00:09:17.518 }, 00:09:17.518 
"memory_domains": [ 00:09:17.518 { 00:09:17.518 "dma_device_id": "system", 00:09:17.518 "dma_device_type": 1 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.518 "dma_device_type": 2 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "system", 00:09:17.518 "dma_device_type": 1 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.518 "dma_device_type": 2 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "system", 00:09:17.518 "dma_device_type": 1 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.518 "dma_device_type": 2 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "system", 00:09:17.518 "dma_device_type": 1 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.518 "dma_device_type": 2 00:09:17.518 } 00:09:17.518 ], 00:09:17.518 "driver_specific": { 00:09:17.518 "raid": { 00:09:17.518 "uuid": "336df15f-0b06-481e-b62f-2df6f7133cb0", 00:09:17.518 "strip_size_kb": 64, 00:09:17.518 "state": "online", 00:09:17.518 "raid_level": "raid0", 00:09:17.518 "superblock": true, 00:09:17.518 "num_base_bdevs": 4, 00:09:17.518 "num_base_bdevs_discovered": 4, 00:09:17.518 "num_base_bdevs_operational": 4, 00:09:17.518 "base_bdevs_list": [ 00:09:17.518 { 00:09:17.518 "name": "pt1", 00:09:17.518 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.518 "is_configured": true, 00:09:17.518 "data_offset": 2048, 00:09:17.518 "data_size": 63488 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "name": "pt2", 00:09:17.518 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.518 "is_configured": true, 00:09:17.518 "data_offset": 2048, 00:09:17.518 "data_size": 63488 00:09:17.518 }, 00:09:17.518 { 00:09:17.518 "name": "pt3", 00:09:17.518 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.518 "is_configured": true, 00:09:17.518 "data_offset": 2048, 00:09:17.518 "data_size": 63488 
00:09:17.518 }, 00:09:17.518 { 00:09:17.519 "name": "pt4", 00:09:17.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:17.519 "is_configured": true, 00:09:17.519 "data_offset": 2048, 00:09:17.519 "data_size": 63488 00:09:17.519 } 00:09:17.519 ] 00:09:17.519 } 00:09:17.519 } 00:09:17.519 }' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:17.519 pt2 00:09:17.519 pt3 00:09:17.519 pt4' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.519 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.780 18:40:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.780 [2024-12-15 18:40:18.108444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 336df15f-0b06-481e-b62f-2df6f7133cb0 '!=' 336df15f-0b06-481e-b62f-2df6f7133cb0 ']' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83579 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83579 ']' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83579 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83579 00:09:17.780 killing process with pid 83579 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83579' 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83579 00:09:17.780 [2024-12-15 18:40:18.194531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.780 [2024-12-15 18:40:18.194638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.780 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83579 00:09:17.780 [2024-12-15 18:40:18.194707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.780 [2024-12-15 18:40:18.194719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:18.040 [2024-12-15 18:40:18.240195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.040 18:40:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:18.040 00:09:18.040 real 0m4.153s 00:09:18.040 user 0m6.540s 00:09:18.040 sys 0m0.968s 00:09:18.040 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.040 18:40:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.040 ************************************ 00:09:18.040 END TEST raid_superblock_test 
00:09:18.040 ************************************ 00:09:18.300 18:40:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:18.300 18:40:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.300 18:40:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.300 18:40:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.300 ************************************ 00:09:18.300 START TEST raid_read_error_test 00:09:18.300 ************************************ 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.300 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uTPny69cjn 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83827 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83827 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83827 ']' 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.301 18:40:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.301 [2024-12-15 18:40:18.638395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:18.301 [2024-12-15 18:40:18.638665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83827 ] 00:09:18.561 [2024-12-15 18:40:18.796904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.561 [2024-12-15 18:40:18.823903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.561 [2024-12-15 18:40:18.866819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.561 [2024-12-15 18:40:18.866927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 BaseBdev1_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 true 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 [2024-12-15 18:40:19.490872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:19.132 [2024-12-15 18:40:19.490973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.132 [2024-12-15 18:40:19.491019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:19.132 [2024-12-15 18:40:19.491052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.132 [2024-12-15 18:40:19.493428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.132 [2024-12-15 18:40:19.493511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:19.132 BaseBdev1 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 BaseBdev2_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 true 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 [2024-12-15 18:40:19.531705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:19.132 [2024-12-15 18:40:19.531808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.132 [2024-12-15 18:40:19.531847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:19.132 [2024-12-15 18:40:19.531875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.132 [2024-12-15 18:40:19.533977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.132 [2024-12-15 18:40:19.534047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:19.132 BaseBdev2 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 BaseBdev3_malloc 00:09:19.132 18:40:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.132 true 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.132 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 [2024-12-15 18:40:19.572327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:19.393 [2024-12-15 18:40:19.572439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.393 [2024-12-15 18:40:19.572509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:19.393 [2024-12-15 18:40:19.572524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.393 [2024-12-15 18:40:19.574874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.393 [2024-12-15 18:40:19.574925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:19.393 BaseBdev3 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 BaseBdev4_malloc 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 true 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 [2024-12-15 18:40:19.625714] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:19.393 [2024-12-15 18:40:19.625848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.393 [2024-12-15 18:40:19.625896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:19.393 [2024-12-15 18:40:19.625906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.393 [2024-12-15 18:40:19.628013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.393 [2024-12-15 18:40:19.628051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:19.393 BaseBdev4 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.393 [2024-12-15 18:40:19.637814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.393 [2024-12-15 18:40:19.639705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.393 [2024-12-15 18:40:19.639852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.393 [2024-12-15 18:40:19.639928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:19.393 [2024-12-15 18:40:19.640163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:19.393 [2024-12-15 18:40:19.640208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:19.393 [2024-12-15 18:40:19.640531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:19.393 [2024-12-15 18:40:19.640709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:19.393 [2024-12-15 18:40:19.640752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:19.393 [2024-12-15 18:40:19.640979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:19.393 18:40:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.393 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.394 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.394 "name": "raid_bdev1", 00:09:19.394 "uuid": "f099ed56-9a47-4d88-955d-6b85b1aa38b3", 00:09:19.394 "strip_size_kb": 64, 00:09:19.394 "state": "online", 00:09:19.394 "raid_level": "raid0", 00:09:19.394 "superblock": true, 00:09:19.394 "num_base_bdevs": 4, 00:09:19.394 "num_base_bdevs_discovered": 4, 00:09:19.394 "num_base_bdevs_operational": 4, 00:09:19.394 "base_bdevs_list": [ 00:09:19.394 
{ 00:09:19.394 "name": "BaseBdev1", 00:09:19.394 "uuid": "fa2a99d0-6322-5a64-8c6f-e0d3f87697a5", 00:09:19.394 "is_configured": true, 00:09:19.394 "data_offset": 2048, 00:09:19.394 "data_size": 63488 00:09:19.394 }, 00:09:19.394 { 00:09:19.394 "name": "BaseBdev2", 00:09:19.394 "uuid": "87aae422-ea99-53e4-8cb2-7f7cbc66c01e", 00:09:19.394 "is_configured": true, 00:09:19.394 "data_offset": 2048, 00:09:19.394 "data_size": 63488 00:09:19.394 }, 00:09:19.394 { 00:09:19.394 "name": "BaseBdev3", 00:09:19.394 "uuid": "243cc578-7e50-5d44-8787-78d868efdc09", 00:09:19.394 "is_configured": true, 00:09:19.394 "data_offset": 2048, 00:09:19.394 "data_size": 63488 00:09:19.394 }, 00:09:19.394 { 00:09:19.394 "name": "BaseBdev4", 00:09:19.394 "uuid": "57013d69-d4da-5d68-8f05-75e8ce075e4e", 00:09:19.394 "is_configured": true, 00:09:19.394 "data_offset": 2048, 00:09:19.394 "data_size": 63488 00:09:19.394 } 00:09:19.394 ] 00:09:19.394 }' 00:09:19.394 18:40:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.394 18:40:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.015 18:40:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:20.015 18:40:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:20.015 [2024-12-15 18:40:20.205154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.954 18:40:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.954 18:40:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.954 "name": "raid_bdev1", 00:09:20.954 "uuid": "f099ed56-9a47-4d88-955d-6b85b1aa38b3", 00:09:20.954 "strip_size_kb": 64, 00:09:20.954 "state": "online", 00:09:20.954 "raid_level": "raid0", 00:09:20.954 "superblock": true, 00:09:20.954 "num_base_bdevs": 4, 00:09:20.954 "num_base_bdevs_discovered": 4, 00:09:20.954 "num_base_bdevs_operational": 4, 00:09:20.954 "base_bdevs_list": [ 00:09:20.954 { 00:09:20.954 "name": "BaseBdev1", 00:09:20.954 "uuid": "fa2a99d0-6322-5a64-8c6f-e0d3f87697a5", 00:09:20.954 "is_configured": true, 00:09:20.954 "data_offset": 2048, 00:09:20.954 "data_size": 63488 00:09:20.954 }, 00:09:20.954 { 00:09:20.954 "name": "BaseBdev2", 00:09:20.954 "uuid": "87aae422-ea99-53e4-8cb2-7f7cbc66c01e", 00:09:20.954 "is_configured": true, 00:09:20.954 "data_offset": 2048, 00:09:20.954 "data_size": 63488 00:09:20.954 }, 00:09:20.954 { 00:09:20.954 "name": "BaseBdev3", 00:09:20.954 "uuid": "243cc578-7e50-5d44-8787-78d868efdc09", 00:09:20.954 "is_configured": true, 00:09:20.954 "data_offset": 2048, 00:09:20.954 "data_size": 63488 00:09:20.954 }, 00:09:20.954 { 00:09:20.954 "name": "BaseBdev4", 00:09:20.954 "uuid": "57013d69-d4da-5d68-8f05-75e8ce075e4e", 00:09:20.954 "is_configured": true, 00:09:20.954 "data_offset": 2048, 00:09:20.954 "data_size": 63488 00:09:20.954 } 00:09:20.954 ] 00:09:20.954 }' 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.954 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.214 [2024-12-15 18:40:21.504720] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.214 [2024-12-15 18:40:21.504827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.214 [2024-12-15 18:40:21.507530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.214 [2024-12-15 18:40:21.507639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.214 [2024-12-15 18:40:21.507706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.214 [2024-12-15 18:40:21.507765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:21.214 { 00:09:21.214 "results": [ 00:09:21.214 { 00:09:21.214 "job": "raid_bdev1", 00:09:21.214 "core_mask": "0x1", 00:09:21.214 "workload": "randrw", 00:09:21.214 "percentage": 50, 00:09:21.214 "status": "finished", 00:09:21.214 "queue_depth": 1, 00:09:21.214 "io_size": 131072, 00:09:21.214 "runtime": 1.300406, 00:09:21.214 "iops": 15750.46562381287, 00:09:21.214 "mibps": 1968.8082029766088, 00:09:21.214 "io_failed": 1, 00:09:21.214 "io_timeout": 0, 00:09:21.214 "avg_latency_us": 87.85473436593601, 00:09:21.214 "min_latency_us": 26.494323144104804, 00:09:21.214 "max_latency_us": 1380.8349344978167 00:09:21.214 } 00:09:21.214 ], 00:09:21.214 "core_count": 1 00:09:21.214 } 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83827 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83827 ']' 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83827 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83827 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.214 killing process with pid 83827 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83827' 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83827 00:09:21.214 [2024-12-15 18:40:21.554911] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.214 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83827 00:09:21.214 [2024-12-15 18:40:21.591354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uTPny69cjn 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:09:21.474 00:09:21.474 real 0m3.292s 00:09:21.474 user 0m4.102s 00:09:21.474 sys 0m0.569s 00:09:21.474 ************************************ 00:09:21.474 END TEST raid_read_error_test 
00:09:21.474 ************************************ 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.474 18:40:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.474 18:40:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:21.474 18:40:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.474 18:40:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.474 18:40:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.474 ************************************ 00:09:21.474 START TEST raid_write_error_test 00:09:21.474 ************************************ 00:09:21.474 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:09:21.474 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:21.474 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:21.474 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tAhXYQAelC 00:09:21.475 18:40:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83956 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.475 18:40:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83956 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83956 ']' 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.735 18:40:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.735 [2024-12-15 18:40:22.003680] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:21.735 [2024-12-15 18:40:22.003932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83956 ] 00:09:21.995 [2024-12-15 18:40:22.179186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.995 [2024-12-15 18:40:22.204913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.995 [2024-12-15 18:40:22.247468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.995 [2024-12-15 18:40:22.247505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 BaseBdev1_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 true 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-12-15 18:40:22.867061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.565 [2024-12-15 18:40:22.867165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.565 [2024-12-15 18:40:22.867213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.565 [2024-12-15 18:40:22.867244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.565 [2024-12-15 18:40:22.869348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.565 [2024-12-15 18:40:22.869418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.565 BaseBdev1 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 BaseBdev2_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.565 18:40:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 true 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-12-15 18:40:22.907531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.565 [2024-12-15 18:40:22.907621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.565 [2024-12-15 18:40:22.907663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.565 [2024-12-15 18:40:22.907692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.565 [2024-12-15 18:40:22.909706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.565 [2024-12-15 18:40:22.909789] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.565 BaseBdev2 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:22.565 BaseBdev3_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 true 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-12-15 18:40:22.948084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.565 [2024-12-15 18:40:22.948169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.565 [2024-12-15 18:40:22.948195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.565 [2024-12-15 18:40:22.948204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.565 [2024-12-15 18:40:22.950281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.565 [2024-12-15 18:40:22.950317] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.565 BaseBdev3 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 BaseBdev4_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 true 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.565 18:40:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-12-15 18:40:22.999725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:22.565 [2024-12-15 18:40:22.999825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.565 [2024-12-15 18:40:22.999867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:22.565 [2024-12-15 18:40:22.999900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.565 [2024-12-15 18:40:23.002068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.565 [2024-12-15 18:40:23.002109] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:22.825 BaseBdev4 
00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.825 [2024-12-15 18:40:23.011779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.825 [2024-12-15 18:40:23.013850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.825 [2024-12-15 18:40:23.013982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.825 [2024-12-15 18:40:23.014061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:22.825 [2024-12-15 18:40:23.014294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:22.825 [2024-12-15 18:40:23.014345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:22.825 [2024-12-15 18:40:23.014628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:22.825 [2024-12-15 18:40:23.014813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:22.825 [2024-12-15 18:40:23.014859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:22.825 [2024-12-15 18:40:23.015023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.825 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.826 "name": "raid_bdev1", 00:09:22.826 "uuid": "173a9116-70b8-445e-8a6b-b5fe65a53bc5", 00:09:22.826 "strip_size_kb": 64, 00:09:22.826 "state": "online", 00:09:22.826 "raid_level": "raid0", 00:09:22.826 "superblock": true, 00:09:22.826 "num_base_bdevs": 4, 00:09:22.826 "num_base_bdevs_discovered": 4, 00:09:22.826 
"num_base_bdevs_operational": 4, 00:09:22.826 "base_bdevs_list": [ 00:09:22.826 { 00:09:22.826 "name": "BaseBdev1", 00:09:22.826 "uuid": "1c6f6a06-8b0b-5c4e-bf29-688cf601c157", 00:09:22.826 "is_configured": true, 00:09:22.826 "data_offset": 2048, 00:09:22.826 "data_size": 63488 00:09:22.826 }, 00:09:22.826 { 00:09:22.826 "name": "BaseBdev2", 00:09:22.826 "uuid": "6c6a6d09-1434-563a-ae96-03b717845325", 00:09:22.826 "is_configured": true, 00:09:22.826 "data_offset": 2048, 00:09:22.826 "data_size": 63488 00:09:22.826 }, 00:09:22.826 { 00:09:22.826 "name": "BaseBdev3", 00:09:22.826 "uuid": "fa1179ba-15e7-50d9-9bf2-9ffc1bd11a9c", 00:09:22.826 "is_configured": true, 00:09:22.826 "data_offset": 2048, 00:09:22.826 "data_size": 63488 00:09:22.826 }, 00:09:22.826 { 00:09:22.826 "name": "BaseBdev4", 00:09:22.826 "uuid": "7bb36d2b-50b5-5400-82dd-012bf6340ca6", 00:09:22.826 "is_configured": true, 00:09:22.826 "data_offset": 2048, 00:09:22.826 "data_size": 63488 00:09:22.826 } 00:09:22.826 ] 00:09:22.826 }' 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.826 18:40:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.085 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:23.085 18:40:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.345 [2024-12-15 18:40:23.535227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.283 18:40:24 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.284 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.284 "name": "raid_bdev1", 00:09:24.284 "uuid": "173a9116-70b8-445e-8a6b-b5fe65a53bc5", 00:09:24.284 "strip_size_kb": 64, 00:09:24.284 "state": "online", 00:09:24.284 "raid_level": "raid0", 00:09:24.284 "superblock": true, 00:09:24.284 "num_base_bdevs": 4, 00:09:24.284 "num_base_bdevs_discovered": 4, 00:09:24.284 "num_base_bdevs_operational": 4, 00:09:24.284 "base_bdevs_list": [ 00:09:24.284 { 00:09:24.284 "name": "BaseBdev1", 00:09:24.284 "uuid": "1c6f6a06-8b0b-5c4e-bf29-688cf601c157", 00:09:24.284 "is_configured": true, 00:09:24.284 "data_offset": 2048, 00:09:24.284 "data_size": 63488 00:09:24.284 }, 00:09:24.284 { 00:09:24.284 "name": "BaseBdev2", 00:09:24.284 "uuid": "6c6a6d09-1434-563a-ae96-03b717845325", 00:09:24.284 "is_configured": true, 00:09:24.284 "data_offset": 2048, 00:09:24.284 "data_size": 63488 00:09:24.284 }, 00:09:24.284 { 00:09:24.284 "name": "BaseBdev3", 00:09:24.284 "uuid": "fa1179ba-15e7-50d9-9bf2-9ffc1bd11a9c", 00:09:24.284 "is_configured": true, 00:09:24.284 "data_offset": 2048, 00:09:24.284 "data_size": 63488 00:09:24.284 }, 00:09:24.284 { 00:09:24.284 "name": "BaseBdev4", 00:09:24.284 "uuid": "7bb36d2b-50b5-5400-82dd-012bf6340ca6", 00:09:24.284 "is_configured": true, 00:09:24.284 "data_offset": 2048, 00:09:24.284 "data_size": 63488 00:09:24.284 } 00:09:24.284 ] 00:09:24.284 }' 00:09:24.284 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.284 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:24.544 [2024-12-15 18:40:24.951282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.544 [2024-12-15 18:40:24.951371] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.544 [2024-12-15 18:40:24.953975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.544 [2024-12-15 18:40:24.954065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.544 [2024-12-15 18:40:24.954130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.544 [2024-12-15 18:40:24.954170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:24.544 { 00:09:24.544 "results": [ 00:09:24.544 { 00:09:24.544 "job": "raid_bdev1", 00:09:24.544 "core_mask": "0x1", 00:09:24.544 "workload": "randrw", 00:09:24.544 "percentage": 50, 00:09:24.544 "status": "finished", 00:09:24.544 "queue_depth": 1, 00:09:24.544 "io_size": 131072, 00:09:24.544 "runtime": 1.41692, 00:09:24.544 "iops": 16018.547271546735, 00:09:24.544 "mibps": 2002.3184089433419, 00:09:24.544 "io_failed": 1, 00:09:24.544 "io_timeout": 0, 00:09:24.544 "avg_latency_us": 86.38910932652435, 00:09:24.544 "min_latency_us": 25.2646288209607, 00:09:24.544 "max_latency_us": 1502.46288209607 00:09:24.544 } 00:09:24.544 ], 00:09:24.544 "core_count": 1 00:09:24.544 } 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83956 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83956 ']' 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83956 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.544 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83956 00:09:24.806 killing process with pid 83956 00:09:24.806 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.806 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.806 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83956' 00:09:24.806 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83956 00:09:24.806 [2024-12-15 18:40:24.999496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.806 18:40:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83956 00:09:24.806 [2024-12-15 18:40:25.035385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.806 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tAhXYQAelC 00:09:24.806 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.069 ************************************ 00:09:25.069 END TEST raid_write_error_test 00:09:25.069 ************************************ 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.71 != \0\.\0\0 ]] 00:09:25.069 00:09:25.069 real 0m3.365s 00:09:25.069 user 0m4.235s 00:09:25.069 sys 0m0.588s 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.069 18:40:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.069 18:40:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:25.069 18:40:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:25.069 18:40:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:25.069 18:40:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.069 18:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.069 ************************************ 00:09:25.069 START TEST raid_state_function_test 00:09:25.069 ************************************ 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.069 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84089 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84089' 00:09:25.070 Process raid pid: 84089 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84089 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84089 ']' 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.070 18:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.070 [2024-12-15 18:40:25.437177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:25.070 [2024-12-15 18:40:25.437484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.330 [2024-12-15 18:40:25.627944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.330 [2024-12-15 18:40:25.654477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.330 [2024-12-15 18:40:25.697702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.330 [2024-12-15 18:40:25.697851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.901 [2024-12-15 18:40:26.272646] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.901 [2024-12-15 18:40:26.272739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.901 [2024-12-15 18:40:26.272791] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.901 [2024-12-15 18:40:26.272825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.901 [2024-12-15 18:40:26.272846] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:25.901 [2024-12-15 18:40:26.272872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.901 [2024-12-15 18:40:26.272890] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.901 [2024-12-15 18:40:26.272919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.901 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.901 "name": "Existed_Raid", 00:09:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.901 "strip_size_kb": 64, 00:09:25.901 "state": "configuring", 00:09:25.901 "raid_level": "concat", 00:09:25.901 "superblock": false, 00:09:25.901 "num_base_bdevs": 4, 00:09:25.901 "num_base_bdevs_discovered": 0, 00:09:25.901 "num_base_bdevs_operational": 4, 00:09:25.901 "base_bdevs_list": [ 00:09:25.901 { 00:09:25.901 "name": "BaseBdev1", 00:09:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.901 "is_configured": false, 00:09:25.901 "data_offset": 0, 00:09:25.901 "data_size": 0 00:09:25.901 }, 00:09:25.901 { 00:09:25.901 "name": "BaseBdev2", 00:09:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.901 "is_configured": false, 00:09:25.901 "data_offset": 0, 00:09:25.901 "data_size": 0 00:09:25.901 }, 00:09:25.901 { 00:09:25.901 "name": "BaseBdev3", 00:09:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.901 "is_configured": false, 00:09:25.901 "data_offset": 0, 00:09:25.901 "data_size": 0 00:09:25.901 }, 00:09:25.901 { 00:09:25.901 "name": "BaseBdev4", 00:09:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.901 "is_configured": false, 00:09:25.901 "data_offset": 0, 00:09:25.901 "data_size": 0 00:09:25.901 } 00:09:25.901 ] 00:09:25.901 }' 00:09:25.902 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.902 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 [2024-12-15 18:40:26.687900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.488 [2024-12-15 18:40:26.687989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 [2024-12-15 18:40:26.699888] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.488 [2024-12-15 18:40:26.699965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.488 [2024-12-15 18:40:26.699993] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.488 [2024-12-15 18:40:26.700014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.488 [2024-12-15 18:40:26.700032] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.488 [2024-12-15 18:40:26.700053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.488 [2024-12-15 18:40:26.700070] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:26.488 [2024-12-15 18:40:26.700090] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 [2024-12-15 18:40:26.720879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.488 BaseBdev1 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 [ 00:09:26.488 { 00:09:26.488 "name": "BaseBdev1", 00:09:26.488 "aliases": [ 00:09:26.488 "b8c7d815-950b-4fe2-b5f5-33a31f001074" 00:09:26.488 ], 00:09:26.488 "product_name": "Malloc disk", 00:09:26.488 "block_size": 512, 00:09:26.488 "num_blocks": 65536, 00:09:26.488 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:26.488 "assigned_rate_limits": { 00:09:26.488 "rw_ios_per_sec": 0, 00:09:26.488 "rw_mbytes_per_sec": 0, 00:09:26.488 "r_mbytes_per_sec": 0, 00:09:26.488 "w_mbytes_per_sec": 0 00:09:26.488 }, 00:09:26.488 "claimed": true, 00:09:26.488 "claim_type": "exclusive_write", 00:09:26.488 "zoned": false, 00:09:26.488 "supported_io_types": { 00:09:26.488 "read": true, 00:09:26.488 "write": true, 00:09:26.488 "unmap": true, 00:09:26.488 "flush": true, 00:09:26.488 "reset": true, 00:09:26.488 "nvme_admin": false, 00:09:26.488 "nvme_io": false, 00:09:26.488 "nvme_io_md": false, 00:09:26.488 "write_zeroes": true, 00:09:26.488 "zcopy": true, 00:09:26.488 "get_zone_info": false, 00:09:26.488 "zone_management": false, 00:09:26.488 "zone_append": false, 00:09:26.488 "compare": false, 00:09:26.488 "compare_and_write": false, 00:09:26.488 "abort": true, 00:09:26.488 "seek_hole": false, 00:09:26.488 "seek_data": false, 00:09:26.488 "copy": true, 00:09:26.488 "nvme_iov_md": false 00:09:26.488 }, 00:09:26.488 "memory_domains": [ 00:09:26.488 { 00:09:26.488 "dma_device_id": "system", 00:09:26.488 "dma_device_type": 1 00:09:26.488 }, 00:09:26.488 { 00:09:26.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.488 "dma_device_type": 2 00:09:26.488 } 00:09:26.488 ], 00:09:26.488 "driver_specific": {} 00:09:26.488 } 00:09:26.488 ] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.488 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.488 "name": "Existed_Raid", 
00:09:26.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.488 "strip_size_kb": 64, 00:09:26.488 "state": "configuring", 00:09:26.488 "raid_level": "concat", 00:09:26.488 "superblock": false, 00:09:26.488 "num_base_bdevs": 4, 00:09:26.488 "num_base_bdevs_discovered": 1, 00:09:26.488 "num_base_bdevs_operational": 4, 00:09:26.488 "base_bdevs_list": [ 00:09:26.488 { 00:09:26.488 "name": "BaseBdev1", 00:09:26.488 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:26.488 "is_configured": true, 00:09:26.488 "data_offset": 0, 00:09:26.488 "data_size": 65536 00:09:26.488 }, 00:09:26.488 { 00:09:26.488 "name": "BaseBdev2", 00:09:26.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.488 "is_configured": false, 00:09:26.488 "data_offset": 0, 00:09:26.488 "data_size": 0 00:09:26.488 }, 00:09:26.488 { 00:09:26.488 "name": "BaseBdev3", 00:09:26.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.488 "is_configured": false, 00:09:26.488 "data_offset": 0, 00:09:26.488 "data_size": 0 00:09:26.489 }, 00:09:26.489 { 00:09:26.489 "name": "BaseBdev4", 00:09:26.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.489 "is_configured": false, 00:09:26.489 "data_offset": 0, 00:09:26.489 "data_size": 0 00:09:26.489 } 00:09:26.489 ] 00:09:26.489 }' 00:09:26.489 18:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.489 18:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.059 [2024-12-15 18:40:27.208076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.059 [2024-12-15 18:40:27.208181] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.059 [2024-12-15 18:40:27.220122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.059 [2024-12-15 18:40:27.222169] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.059 [2024-12-15 18:40:27.222266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.059 [2024-12-15 18:40:27.222304] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.059 [2024-12-15 18:40:27.222332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.059 [2024-12-15 18:40:27.222363] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:27.059 [2024-12-15 18:40:27.222398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.059 "name": "Existed_Raid", 00:09:27.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.059 "strip_size_kb": 64, 00:09:27.059 "state": "configuring", 00:09:27.059 "raid_level": "concat", 00:09:27.059 "superblock": false, 00:09:27.059 "num_base_bdevs": 4, 00:09:27.059 
"num_base_bdevs_discovered": 1, 00:09:27.059 "num_base_bdevs_operational": 4, 00:09:27.059 "base_bdevs_list": [ 00:09:27.059 { 00:09:27.059 "name": "BaseBdev1", 00:09:27.059 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:27.059 "is_configured": true, 00:09:27.059 "data_offset": 0, 00:09:27.059 "data_size": 65536 00:09:27.059 }, 00:09:27.059 { 00:09:27.059 "name": "BaseBdev2", 00:09:27.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.059 "is_configured": false, 00:09:27.059 "data_offset": 0, 00:09:27.059 "data_size": 0 00:09:27.059 }, 00:09:27.059 { 00:09:27.059 "name": "BaseBdev3", 00:09:27.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.059 "is_configured": false, 00:09:27.059 "data_offset": 0, 00:09:27.059 "data_size": 0 00:09:27.059 }, 00:09:27.059 { 00:09:27.059 "name": "BaseBdev4", 00:09:27.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.059 "is_configured": false, 00:09:27.059 "data_offset": 0, 00:09:27.059 "data_size": 0 00:09:27.059 } 00:09:27.059 ] 00:09:27.059 }' 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.059 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.320 [2024-12-15 18:40:27.690357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.320 BaseBdev2 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:27.320 18:40:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.320 [ 00:09:27.320 { 00:09:27.320 "name": "BaseBdev2", 00:09:27.320 "aliases": [ 00:09:27.320 "f21415eb-1ab2-48cd-bb68-533b6a2f17d7" 00:09:27.320 ], 00:09:27.320 "product_name": "Malloc disk", 00:09:27.320 "block_size": 512, 00:09:27.320 "num_blocks": 65536, 00:09:27.320 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:27.320 "assigned_rate_limits": { 00:09:27.320 "rw_ios_per_sec": 0, 00:09:27.320 "rw_mbytes_per_sec": 0, 00:09:27.320 "r_mbytes_per_sec": 0, 00:09:27.320 "w_mbytes_per_sec": 0 00:09:27.320 }, 00:09:27.320 "claimed": true, 00:09:27.320 "claim_type": "exclusive_write", 00:09:27.320 "zoned": false, 00:09:27.320 "supported_io_types": { 
00:09:27.320 "read": true, 00:09:27.320 "write": true, 00:09:27.320 "unmap": true, 00:09:27.320 "flush": true, 00:09:27.320 "reset": true, 00:09:27.320 "nvme_admin": false, 00:09:27.320 "nvme_io": false, 00:09:27.320 "nvme_io_md": false, 00:09:27.320 "write_zeroes": true, 00:09:27.320 "zcopy": true, 00:09:27.320 "get_zone_info": false, 00:09:27.320 "zone_management": false, 00:09:27.320 "zone_append": false, 00:09:27.320 "compare": false, 00:09:27.320 "compare_and_write": false, 00:09:27.320 "abort": true, 00:09:27.320 "seek_hole": false, 00:09:27.320 "seek_data": false, 00:09:27.320 "copy": true, 00:09:27.320 "nvme_iov_md": false 00:09:27.320 }, 00:09:27.320 "memory_domains": [ 00:09:27.320 { 00:09:27.320 "dma_device_id": "system", 00:09:27.320 "dma_device_type": 1 00:09:27.320 }, 00:09:27.320 { 00:09:27.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.320 "dma_device_type": 2 00:09:27.320 } 00:09:27.320 ], 00:09:27.320 "driver_specific": {} 00:09:27.320 } 00:09:27.320 ] 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.320 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.580 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.580 "name": "Existed_Raid", 00:09:27.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.580 "strip_size_kb": 64, 00:09:27.580 "state": "configuring", 00:09:27.580 "raid_level": "concat", 00:09:27.580 "superblock": false, 00:09:27.580 "num_base_bdevs": 4, 00:09:27.580 "num_base_bdevs_discovered": 2, 00:09:27.580 "num_base_bdevs_operational": 4, 00:09:27.580 "base_bdevs_list": [ 00:09:27.580 { 00:09:27.580 "name": "BaseBdev1", 00:09:27.580 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:27.580 "is_configured": true, 00:09:27.580 "data_offset": 0, 00:09:27.580 "data_size": 65536 00:09:27.580 }, 00:09:27.580 { 00:09:27.580 "name": "BaseBdev2", 00:09:27.580 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:27.580 
"is_configured": true, 00:09:27.580 "data_offset": 0, 00:09:27.580 "data_size": 65536 00:09:27.580 }, 00:09:27.580 { 00:09:27.580 "name": "BaseBdev3", 00:09:27.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.580 "is_configured": false, 00:09:27.580 "data_offset": 0, 00:09:27.580 "data_size": 0 00:09:27.580 }, 00:09:27.580 { 00:09:27.580 "name": "BaseBdev4", 00:09:27.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.580 "is_configured": false, 00:09:27.580 "data_offset": 0, 00:09:27.580 "data_size": 0 00:09:27.580 } 00:09:27.580 ] 00:09:27.580 }' 00:09:27.580 18:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.580 18:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.840 [2024-12-15 18:40:28.187851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.840 BaseBdev3 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.840 18:40:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.841 [ 00:09:27.841 { 00:09:27.841 "name": "BaseBdev3", 00:09:27.841 "aliases": [ 00:09:27.841 "ed54485d-44a0-440d-a0db-3bd578bd300d" 00:09:27.841 ], 00:09:27.841 "product_name": "Malloc disk", 00:09:27.841 "block_size": 512, 00:09:27.841 "num_blocks": 65536, 00:09:27.841 "uuid": "ed54485d-44a0-440d-a0db-3bd578bd300d", 00:09:27.841 "assigned_rate_limits": { 00:09:27.841 "rw_ios_per_sec": 0, 00:09:27.841 "rw_mbytes_per_sec": 0, 00:09:27.841 "r_mbytes_per_sec": 0, 00:09:27.841 "w_mbytes_per_sec": 0 00:09:27.841 }, 00:09:27.841 "claimed": true, 00:09:27.841 "claim_type": "exclusive_write", 00:09:27.841 "zoned": false, 00:09:27.841 "supported_io_types": { 00:09:27.841 "read": true, 00:09:27.841 "write": true, 00:09:27.841 "unmap": true, 00:09:27.841 "flush": true, 00:09:27.841 "reset": true, 00:09:27.841 "nvme_admin": false, 00:09:27.841 "nvme_io": false, 00:09:27.841 "nvme_io_md": false, 00:09:27.841 "write_zeroes": true, 00:09:27.841 "zcopy": true, 00:09:27.841 "get_zone_info": false, 00:09:27.841 "zone_management": false, 00:09:27.841 "zone_append": false, 00:09:27.841 "compare": false, 00:09:27.841 "compare_and_write": false, 
00:09:27.841 "abort": true, 00:09:27.841 "seek_hole": false, 00:09:27.841 "seek_data": false, 00:09:27.841 "copy": true, 00:09:27.841 "nvme_iov_md": false 00:09:27.841 }, 00:09:27.841 "memory_domains": [ 00:09:27.841 { 00:09:27.841 "dma_device_id": "system", 00:09:27.841 "dma_device_type": 1 00:09:27.841 }, 00:09:27.841 { 00:09:27.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.841 "dma_device_type": 2 00:09:27.841 } 00:09:27.841 ], 00:09:27.841 "driver_specific": {} 00:09:27.841 } 00:09:27.841 ] 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.841 "name": "Existed_Raid", 00:09:27.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.841 "strip_size_kb": 64, 00:09:27.841 "state": "configuring", 00:09:27.841 "raid_level": "concat", 00:09:27.841 "superblock": false, 00:09:27.841 "num_base_bdevs": 4, 00:09:27.841 "num_base_bdevs_discovered": 3, 00:09:27.841 "num_base_bdevs_operational": 4, 00:09:27.841 "base_bdevs_list": [ 00:09:27.841 { 00:09:27.841 "name": "BaseBdev1", 00:09:27.841 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:27.841 "is_configured": true, 00:09:27.841 "data_offset": 0, 00:09:27.841 "data_size": 65536 00:09:27.841 }, 00:09:27.841 { 00:09:27.841 "name": "BaseBdev2", 00:09:27.841 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:27.841 "is_configured": true, 00:09:27.841 "data_offset": 0, 00:09:27.841 "data_size": 65536 00:09:27.841 }, 00:09:27.841 { 00:09:27.841 "name": "BaseBdev3", 00:09:27.841 "uuid": "ed54485d-44a0-440d-a0db-3bd578bd300d", 00:09:27.841 "is_configured": true, 00:09:27.841 "data_offset": 0, 00:09:27.841 "data_size": 65536 00:09:27.841 }, 00:09:27.841 { 00:09:27.841 "name": "BaseBdev4", 00:09:27.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.841 "is_configured": false, 
00:09:27.841 "data_offset": 0, 00:09:27.841 "data_size": 0 00:09:27.841 } 00:09:27.841 ] 00:09:27.841 }' 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.841 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.411 [2024-12-15 18:40:28.670083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.411 [2024-12-15 18:40:28.670200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:28.411 [2024-12-15 18:40:28.670226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:28.411 [2024-12-15 18:40:28.670526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.411 [2024-12-15 18:40:28.670659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:28.411 [2024-12-15 18:40:28.670677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:28.411 [2024-12-15 18:40:28.670887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.411 BaseBdev4 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.411 [ 00:09:28.411 { 00:09:28.411 "name": "BaseBdev4", 00:09:28.411 "aliases": [ 00:09:28.411 "8dc1e683-179b-459f-a859-b197ab8acca2" 00:09:28.411 ], 00:09:28.411 "product_name": "Malloc disk", 00:09:28.411 "block_size": 512, 00:09:28.411 "num_blocks": 65536, 00:09:28.411 "uuid": "8dc1e683-179b-459f-a859-b197ab8acca2", 00:09:28.411 "assigned_rate_limits": { 00:09:28.411 "rw_ios_per_sec": 0, 00:09:28.411 "rw_mbytes_per_sec": 0, 00:09:28.411 "r_mbytes_per_sec": 0, 00:09:28.411 "w_mbytes_per_sec": 0 00:09:28.411 }, 00:09:28.411 "claimed": true, 00:09:28.411 "claim_type": "exclusive_write", 00:09:28.411 "zoned": false, 00:09:28.411 "supported_io_types": { 00:09:28.411 "read": true, 00:09:28.411 "write": true, 00:09:28.411 "unmap": true, 00:09:28.411 "flush": true, 00:09:28.411 "reset": true, 00:09:28.411 
"nvme_admin": false, 00:09:28.411 "nvme_io": false, 00:09:28.411 "nvme_io_md": false, 00:09:28.411 "write_zeroes": true, 00:09:28.411 "zcopy": true, 00:09:28.411 "get_zone_info": false, 00:09:28.411 "zone_management": false, 00:09:28.411 "zone_append": false, 00:09:28.411 "compare": false, 00:09:28.411 "compare_and_write": false, 00:09:28.411 "abort": true, 00:09:28.411 "seek_hole": false, 00:09:28.411 "seek_data": false, 00:09:28.411 "copy": true, 00:09:28.411 "nvme_iov_md": false 00:09:28.411 }, 00:09:28.411 "memory_domains": [ 00:09:28.411 { 00:09:28.411 "dma_device_id": "system", 00:09:28.411 "dma_device_type": 1 00:09:28.411 }, 00:09:28.411 { 00:09:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.411 "dma_device_type": 2 00:09:28.411 } 00:09:28.411 ], 00:09:28.411 "driver_specific": {} 00:09:28.411 } 00:09:28.411 ] 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.411 
18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.411 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.412 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.412 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.412 "name": "Existed_Raid", 00:09:28.412 "uuid": "43a9640e-f575-4685-820e-d6b4dfecb1e8", 00:09:28.412 "strip_size_kb": 64, 00:09:28.412 "state": "online", 00:09:28.412 "raid_level": "concat", 00:09:28.412 "superblock": false, 00:09:28.412 "num_base_bdevs": 4, 00:09:28.412 "num_base_bdevs_discovered": 4, 00:09:28.412 "num_base_bdevs_operational": 4, 00:09:28.412 "base_bdevs_list": [ 00:09:28.412 { 00:09:28.412 "name": "BaseBdev1", 00:09:28.412 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:28.412 "is_configured": true, 00:09:28.412 "data_offset": 0, 00:09:28.412 "data_size": 65536 00:09:28.412 }, 00:09:28.412 { 00:09:28.412 "name": "BaseBdev2", 00:09:28.412 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:28.412 "is_configured": true, 00:09:28.412 "data_offset": 0, 00:09:28.412 "data_size": 65536 00:09:28.412 }, 00:09:28.412 { 00:09:28.412 "name": "BaseBdev3", 
00:09:28.412 "uuid": "ed54485d-44a0-440d-a0db-3bd578bd300d", 00:09:28.412 "is_configured": true, 00:09:28.412 "data_offset": 0, 00:09:28.412 "data_size": 65536 00:09:28.412 }, 00:09:28.412 { 00:09:28.412 "name": "BaseBdev4", 00:09:28.412 "uuid": "8dc1e683-179b-459f-a859-b197ab8acca2", 00:09:28.412 "is_configured": true, 00:09:28.412 "data_offset": 0, 00:09:28.412 "data_size": 65536 00:09:28.412 } 00:09:28.412 ] 00:09:28.412 }' 00:09:28.412 18:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.412 18:40:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.981 [2024-12-15 18:40:29.141700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.981 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.981 
18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:28.981 "name": "Existed_Raid", 00:09:28.981 "aliases": [ 00:09:28.981 "43a9640e-f575-4685-820e-d6b4dfecb1e8" 00:09:28.981 ], 00:09:28.981 "product_name": "Raid Volume", 00:09:28.981 "block_size": 512, 00:09:28.981 "num_blocks": 262144, 00:09:28.981 "uuid": "43a9640e-f575-4685-820e-d6b4dfecb1e8", 00:09:28.981 "assigned_rate_limits": { 00:09:28.981 "rw_ios_per_sec": 0, 00:09:28.981 "rw_mbytes_per_sec": 0, 00:09:28.981 "r_mbytes_per_sec": 0, 00:09:28.981 "w_mbytes_per_sec": 0 00:09:28.981 }, 00:09:28.981 "claimed": false, 00:09:28.981 "zoned": false, 00:09:28.981 "supported_io_types": { 00:09:28.981 "read": true, 00:09:28.981 "write": true, 00:09:28.981 "unmap": true, 00:09:28.981 "flush": true, 00:09:28.981 "reset": true, 00:09:28.981 "nvme_admin": false, 00:09:28.981 "nvme_io": false, 00:09:28.981 "nvme_io_md": false, 00:09:28.981 "write_zeroes": true, 00:09:28.981 "zcopy": false, 00:09:28.981 "get_zone_info": false, 00:09:28.981 "zone_management": false, 00:09:28.981 "zone_append": false, 00:09:28.981 "compare": false, 00:09:28.981 "compare_and_write": false, 00:09:28.981 "abort": false, 00:09:28.981 "seek_hole": false, 00:09:28.981 "seek_data": false, 00:09:28.981 "copy": false, 00:09:28.981 "nvme_iov_md": false 00:09:28.981 }, 00:09:28.982 "memory_domains": [ 00:09:28.982 { 00:09:28.982 "dma_device_id": "system", 00:09:28.982 "dma_device_type": 1 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.982 "dma_device_type": 2 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "system", 00:09:28.982 "dma_device_type": 1 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.982 "dma_device_type": 2 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "system", 00:09:28.982 "dma_device_type": 1 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:28.982 "dma_device_type": 2 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "system", 00:09:28.982 "dma_device_type": 1 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.982 "dma_device_type": 2 00:09:28.982 } 00:09:28.982 ], 00:09:28.982 "driver_specific": { 00:09:28.982 "raid": { 00:09:28.982 "uuid": "43a9640e-f575-4685-820e-d6b4dfecb1e8", 00:09:28.982 "strip_size_kb": 64, 00:09:28.982 "state": "online", 00:09:28.982 "raid_level": "concat", 00:09:28.982 "superblock": false, 00:09:28.982 "num_base_bdevs": 4, 00:09:28.982 "num_base_bdevs_discovered": 4, 00:09:28.982 "num_base_bdevs_operational": 4, 00:09:28.982 "base_bdevs_list": [ 00:09:28.982 { 00:09:28.982 "name": "BaseBdev1", 00:09:28.982 "uuid": "b8c7d815-950b-4fe2-b5f5-33a31f001074", 00:09:28.982 "is_configured": true, 00:09:28.982 "data_offset": 0, 00:09:28.982 "data_size": 65536 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "name": "BaseBdev2", 00:09:28.982 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:28.982 "is_configured": true, 00:09:28.982 "data_offset": 0, 00:09:28.982 "data_size": 65536 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "name": "BaseBdev3", 00:09:28.982 "uuid": "ed54485d-44a0-440d-a0db-3bd578bd300d", 00:09:28.982 "is_configured": true, 00:09:28.982 "data_offset": 0, 00:09:28.982 "data_size": 65536 00:09:28.982 }, 00:09:28.982 { 00:09:28.982 "name": "BaseBdev4", 00:09:28.982 "uuid": "8dc1e683-179b-459f-a859-b197ab8acca2", 00:09:28.982 "is_configured": true, 00:09:28.982 "data_offset": 0, 00:09:28.982 "data_size": 65536 00:09:28.982 } 00:09:28.982 ] 00:09:28.982 } 00:09:28.982 } 00:09:28.982 }' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:28.982 BaseBdev2 
00:09:28.982 BaseBdev3 00:09:28.982 BaseBdev4' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.982 18:40:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:28.982 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.242 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.243 18:40:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 [2024-12-15 18:40:29.432952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.243 [2024-12-15 18:40:29.433025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.243 [2024-12-15 18:40:29.433095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.243 "name": "Existed_Raid", 00:09:29.243 "uuid": "43a9640e-f575-4685-820e-d6b4dfecb1e8", 00:09:29.243 "strip_size_kb": 64, 00:09:29.243 "state": "offline", 00:09:29.243 "raid_level": "concat", 00:09:29.243 "superblock": false, 00:09:29.243 "num_base_bdevs": 4, 00:09:29.243 "num_base_bdevs_discovered": 3, 00:09:29.243 "num_base_bdevs_operational": 3, 00:09:29.243 "base_bdevs_list": [ 00:09:29.243 { 00:09:29.243 "name": null, 00:09:29.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.243 "is_configured": false, 00:09:29.243 "data_offset": 0, 00:09:29.243 "data_size": 65536 00:09:29.243 }, 00:09:29.243 { 00:09:29.243 "name": "BaseBdev2", 00:09:29.243 "uuid": "f21415eb-1ab2-48cd-bb68-533b6a2f17d7", 00:09:29.243 "is_configured": 
true, 00:09:29.243 "data_offset": 0, 00:09:29.243 "data_size": 65536 00:09:29.243 }, 00:09:29.243 { 00:09:29.243 "name": "BaseBdev3", 00:09:29.243 "uuid": "ed54485d-44a0-440d-a0db-3bd578bd300d", 00:09:29.243 "is_configured": true, 00:09:29.243 "data_offset": 0, 00:09:29.243 "data_size": 65536 00:09:29.243 }, 00:09:29.243 { 00:09:29.243 "name": "BaseBdev4", 00:09:29.243 "uuid": "8dc1e683-179b-459f-a859-b197ab8acca2", 00:09:29.243 "is_configured": true, 00:09:29.243 "data_offset": 0, 00:09:29.243 "data_size": 65536 00:09:29.243 } 00:09:29.243 ] 00:09:29.243 }' 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.243 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.503 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 [2024-12-15 18:40:29.963825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 18:40:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 [2024-12-15 18:40:30.031126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.780 18:40:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 [2024-12-15 18:40:30.093925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:29.780 [2024-12-15 18:40:30.094009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.780 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.781 BaseBdev2 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.781 [ 00:09:29.781 { 00:09:29.781 "name": "BaseBdev2", 00:09:29.781 "aliases": [ 00:09:29.781 "2b869371-b1f3-4e80-8f1e-52f9076416a0" 00:09:29.781 ], 00:09:29.781 "product_name": "Malloc disk", 00:09:29.781 "block_size": 512, 00:09:29.781 "num_blocks": 65536, 00:09:29.781 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:29.781 "assigned_rate_limits": { 00:09:29.781 "rw_ios_per_sec": 0, 00:09:29.781 "rw_mbytes_per_sec": 0, 00:09:29.781 "r_mbytes_per_sec": 0, 00:09:29.781 "w_mbytes_per_sec": 0 00:09:29.781 }, 00:09:29.781 "claimed": false, 00:09:29.781 "zoned": false, 00:09:29.781 "supported_io_types": { 00:09:29.781 "read": true, 00:09:29.781 "write": true, 00:09:29.781 "unmap": true, 00:09:29.781 "flush": true, 00:09:29.781 "reset": true, 00:09:29.781 "nvme_admin": false, 00:09:29.781 "nvme_io": false, 00:09:29.781 "nvme_io_md": false, 00:09:29.781 "write_zeroes": true, 00:09:29.781 "zcopy": true, 00:09:29.781 "get_zone_info": false, 00:09:29.781 "zone_management": false, 00:09:29.781 "zone_append": false, 00:09:29.781 "compare": false, 00:09:29.781 "compare_and_write": false, 00:09:29.781 "abort": true, 00:09:29.781 "seek_hole": false, 00:09:29.781 
"seek_data": false, 00:09:29.781 "copy": true, 00:09:29.781 "nvme_iov_md": false 00:09:29.781 }, 00:09:29.781 "memory_domains": [ 00:09:29.781 { 00:09:29.781 "dma_device_id": "system", 00:09:29.781 "dma_device_type": 1 00:09:29.781 }, 00:09:29.781 { 00:09:29.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.781 "dma_device_type": 2 00:09:29.781 } 00:09:29.781 ], 00:09:29.781 "driver_specific": {} 00:09:29.781 } 00:09:29.781 ] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.781 BaseBdev3 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.781 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.041 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.041 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.041 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 [ 00:09:30.042 { 00:09:30.042 "name": "BaseBdev3", 00:09:30.042 "aliases": [ 00:09:30.042 "0b4a8003-97f6-4a79-a1a3-3690463e54a3" 00:09:30.042 ], 00:09:30.042 "product_name": "Malloc disk", 00:09:30.042 "block_size": 512, 00:09:30.042 "num_blocks": 65536, 00:09:30.042 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:30.042 "assigned_rate_limits": { 00:09:30.042 "rw_ios_per_sec": 0, 00:09:30.042 "rw_mbytes_per_sec": 0, 00:09:30.042 "r_mbytes_per_sec": 0, 00:09:30.042 "w_mbytes_per_sec": 0 00:09:30.042 }, 00:09:30.042 "claimed": false, 00:09:30.042 "zoned": false, 00:09:30.042 "supported_io_types": { 00:09:30.042 "read": true, 00:09:30.042 "write": true, 00:09:30.042 "unmap": true, 00:09:30.042 "flush": true, 00:09:30.042 "reset": true, 00:09:30.042 "nvme_admin": false, 00:09:30.042 "nvme_io": false, 00:09:30.042 "nvme_io_md": false, 00:09:30.042 "write_zeroes": true, 00:09:30.042 "zcopy": true, 00:09:30.042 "get_zone_info": false, 00:09:30.042 "zone_management": false, 00:09:30.042 "zone_append": false, 00:09:30.042 "compare": false, 00:09:30.042 "compare_and_write": false, 00:09:30.042 "abort": true, 00:09:30.042 "seek_hole": false, 00:09:30.042 "seek_data": false, 
00:09:30.042 "copy": true, 00:09:30.042 "nvme_iov_md": false 00:09:30.042 }, 00:09:30.042 "memory_domains": [ 00:09:30.042 { 00:09:30.042 "dma_device_id": "system", 00:09:30.042 "dma_device_type": 1 00:09:30.042 }, 00:09:30.042 { 00:09:30.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.042 "dma_device_type": 2 00:09:30.042 } 00:09:30.042 ], 00:09:30.042 "driver_specific": {} 00:09:30.042 } 00:09:30.042 ] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 BaseBdev4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.042 
18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 [ 00:09:30.042 { 00:09:30.042 "name": "BaseBdev4", 00:09:30.042 "aliases": [ 00:09:30.042 "50e8a55c-f35c-43a1-bc66-aeec3e4d220b" 00:09:30.042 ], 00:09:30.042 "product_name": "Malloc disk", 00:09:30.042 "block_size": 512, 00:09:30.042 "num_blocks": 65536, 00:09:30.042 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:30.042 "assigned_rate_limits": { 00:09:30.042 "rw_ios_per_sec": 0, 00:09:30.042 "rw_mbytes_per_sec": 0, 00:09:30.042 "r_mbytes_per_sec": 0, 00:09:30.042 "w_mbytes_per_sec": 0 00:09:30.042 }, 00:09:30.042 "claimed": false, 00:09:30.042 "zoned": false, 00:09:30.042 "supported_io_types": { 00:09:30.042 "read": true, 00:09:30.042 "write": true, 00:09:30.042 "unmap": true, 00:09:30.042 "flush": true, 00:09:30.042 "reset": true, 00:09:30.042 "nvme_admin": false, 00:09:30.042 "nvme_io": false, 00:09:30.042 "nvme_io_md": false, 00:09:30.042 "write_zeroes": true, 00:09:30.042 "zcopy": true, 00:09:30.042 "get_zone_info": false, 00:09:30.042 "zone_management": false, 00:09:30.042 "zone_append": false, 00:09:30.042 "compare": false, 00:09:30.042 "compare_and_write": false, 00:09:30.042 "abort": true, 00:09:30.042 "seek_hole": false, 00:09:30.042 "seek_data": false, 00:09:30.042 
"copy": true, 00:09:30.042 "nvme_iov_md": false 00:09:30.042 }, 00:09:30.042 "memory_domains": [ 00:09:30.042 { 00:09:30.042 "dma_device_id": "system", 00:09:30.042 "dma_device_type": 1 00:09:30.042 }, 00:09:30.042 { 00:09:30.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.042 "dma_device_type": 2 00:09:30.042 } 00:09:30.042 ], 00:09:30.042 "driver_specific": {} 00:09:30.042 } 00:09:30.042 ] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 [2024-12-15 18:40:30.311360] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.042 [2024-12-15 18:40:30.311456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.042 [2024-12-15 18:40:30.311498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.042 [2024-12-15 18:40:30.313396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.042 [2024-12-15 18:40:30.313486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.042 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.042 "name": "Existed_Raid", 00:09:30.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.042 "strip_size_kb": 64, 00:09:30.042 "state": "configuring", 00:09:30.042 
"raid_level": "concat", 00:09:30.042 "superblock": false, 00:09:30.043 "num_base_bdevs": 4, 00:09:30.043 "num_base_bdevs_discovered": 3, 00:09:30.043 "num_base_bdevs_operational": 4, 00:09:30.043 "base_bdevs_list": [ 00:09:30.043 { 00:09:30.043 "name": "BaseBdev1", 00:09:30.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.043 "is_configured": false, 00:09:30.043 "data_offset": 0, 00:09:30.043 "data_size": 0 00:09:30.043 }, 00:09:30.043 { 00:09:30.043 "name": "BaseBdev2", 00:09:30.043 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:30.043 "is_configured": true, 00:09:30.043 "data_offset": 0, 00:09:30.043 "data_size": 65536 00:09:30.043 }, 00:09:30.043 { 00:09:30.043 "name": "BaseBdev3", 00:09:30.043 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:30.043 "is_configured": true, 00:09:30.043 "data_offset": 0, 00:09:30.043 "data_size": 65536 00:09:30.043 }, 00:09:30.043 { 00:09:30.043 "name": "BaseBdev4", 00:09:30.043 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:30.043 "is_configured": true, 00:09:30.043 "data_offset": 0, 00:09:30.043 "data_size": 65536 00:09:30.043 } 00:09:30.043 ] 00:09:30.043 }' 00:09:30.043 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.043 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.613 [2024-12-15 18:40:30.810558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.613 "name": "Existed_Raid", 00:09:30.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.613 "strip_size_kb": 64, 00:09:30.613 "state": "configuring", 00:09:30.613 "raid_level": "concat", 00:09:30.613 "superblock": false, 
00:09:30.613 "num_base_bdevs": 4, 00:09:30.613 "num_base_bdevs_discovered": 2, 00:09:30.613 "num_base_bdevs_operational": 4, 00:09:30.613 "base_bdevs_list": [ 00:09:30.613 { 00:09:30.613 "name": "BaseBdev1", 00:09:30.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.613 "is_configured": false, 00:09:30.613 "data_offset": 0, 00:09:30.613 "data_size": 0 00:09:30.613 }, 00:09:30.613 { 00:09:30.613 "name": null, 00:09:30.613 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:30.613 "is_configured": false, 00:09:30.613 "data_offset": 0, 00:09:30.613 "data_size": 65536 00:09:30.613 }, 00:09:30.613 { 00:09:30.613 "name": "BaseBdev3", 00:09:30.613 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:30.613 "is_configured": true, 00:09:30.613 "data_offset": 0, 00:09:30.613 "data_size": 65536 00:09:30.613 }, 00:09:30.613 { 00:09:30.613 "name": "BaseBdev4", 00:09:30.613 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:30.613 "is_configured": true, 00:09:30.613 "data_offset": 0, 00:09:30.613 "data_size": 65536 00:09:30.613 } 00:09:30.613 ] 00:09:30.613 }' 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.613 18:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:30.874 18:40:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.874 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.874 [2024-12-15 18:40:31.312902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.874 BaseBdev1 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.133 [ 00:09:31.133 { 00:09:31.133 "name": "BaseBdev1", 00:09:31.133 "aliases": [ 00:09:31.133 "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c" 00:09:31.133 ], 00:09:31.133 "product_name": "Malloc disk", 00:09:31.133 "block_size": 512, 00:09:31.133 "num_blocks": 65536, 00:09:31.133 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:31.133 "assigned_rate_limits": { 00:09:31.133 "rw_ios_per_sec": 0, 00:09:31.133 "rw_mbytes_per_sec": 0, 00:09:31.133 "r_mbytes_per_sec": 0, 00:09:31.133 "w_mbytes_per_sec": 0 00:09:31.133 }, 00:09:31.133 "claimed": true, 00:09:31.133 "claim_type": "exclusive_write", 00:09:31.133 "zoned": false, 00:09:31.133 "supported_io_types": { 00:09:31.133 "read": true, 00:09:31.133 "write": true, 00:09:31.133 "unmap": true, 00:09:31.133 "flush": true, 00:09:31.133 "reset": true, 00:09:31.133 "nvme_admin": false, 00:09:31.133 "nvme_io": false, 00:09:31.133 "nvme_io_md": false, 00:09:31.133 "write_zeroes": true, 00:09:31.133 "zcopy": true, 00:09:31.133 "get_zone_info": false, 00:09:31.133 "zone_management": false, 00:09:31.133 "zone_append": false, 00:09:31.133 "compare": false, 00:09:31.133 "compare_and_write": false, 00:09:31.133 "abort": true, 00:09:31.133 "seek_hole": false, 00:09:31.133 "seek_data": false, 00:09:31.133 "copy": true, 00:09:31.133 "nvme_iov_md": false 00:09:31.133 }, 00:09:31.133 "memory_domains": [ 00:09:31.133 { 00:09:31.133 "dma_device_id": "system", 00:09:31.133 "dma_device_type": 1 00:09:31.133 }, 00:09:31.133 { 00:09:31.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.133 "dma_device_type": 2 00:09:31.133 } 00:09:31.133 ], 00:09:31.133 "driver_specific": {} 00:09:31.133 } 00:09:31.133 ] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.133 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.133 "name": "Existed_Raid", 00:09:31.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.134 "strip_size_kb": 64, 00:09:31.134 "state": "configuring", 00:09:31.134 "raid_level": "concat", 00:09:31.134 "superblock": false, 
00:09:31.134 "num_base_bdevs": 4, 00:09:31.134 "num_base_bdevs_discovered": 3, 00:09:31.134 "num_base_bdevs_operational": 4, 00:09:31.134 "base_bdevs_list": [ 00:09:31.134 { 00:09:31.134 "name": "BaseBdev1", 00:09:31.134 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:31.134 "is_configured": true, 00:09:31.134 "data_offset": 0, 00:09:31.134 "data_size": 65536 00:09:31.134 }, 00:09:31.134 { 00:09:31.134 "name": null, 00:09:31.134 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:31.134 "is_configured": false, 00:09:31.134 "data_offset": 0, 00:09:31.134 "data_size": 65536 00:09:31.134 }, 00:09:31.134 { 00:09:31.134 "name": "BaseBdev3", 00:09:31.134 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:31.134 "is_configured": true, 00:09:31.134 "data_offset": 0, 00:09:31.134 "data_size": 65536 00:09:31.134 }, 00:09:31.134 { 00:09:31.134 "name": "BaseBdev4", 00:09:31.134 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:31.134 "is_configured": true, 00:09:31.134 "data_offset": 0, 00:09:31.134 "data_size": 65536 00:09:31.134 } 00:09:31.134 ] 00:09:31.134 }' 00:09:31.134 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.134 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.393 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.393 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.393 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.393 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.393 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:31.654 18:40:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.654 [2024-12-15 18:40:31.844086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.654 18:40:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.654 "name": "Existed_Raid", 00:09:31.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.654 "strip_size_kb": 64, 00:09:31.654 "state": "configuring", 00:09:31.654 "raid_level": "concat", 00:09:31.654 "superblock": false, 00:09:31.654 "num_base_bdevs": 4, 00:09:31.654 "num_base_bdevs_discovered": 2, 00:09:31.654 "num_base_bdevs_operational": 4, 00:09:31.654 "base_bdevs_list": [ 00:09:31.654 { 00:09:31.654 "name": "BaseBdev1", 00:09:31.654 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:31.654 "is_configured": true, 00:09:31.654 "data_offset": 0, 00:09:31.654 "data_size": 65536 00:09:31.654 }, 00:09:31.654 { 00:09:31.654 "name": null, 00:09:31.654 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:31.654 "is_configured": false, 00:09:31.654 "data_offset": 0, 00:09:31.654 "data_size": 65536 00:09:31.654 }, 00:09:31.654 { 00:09:31.654 "name": null, 00:09:31.654 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:31.654 "is_configured": false, 00:09:31.654 "data_offset": 0, 00:09:31.654 "data_size": 65536 00:09:31.654 }, 00:09:31.654 { 00:09:31.654 "name": "BaseBdev4", 00:09:31.654 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:31.654 "is_configured": true, 00:09:31.654 "data_offset": 0, 00:09:31.654 "data_size": 65536 00:09:31.654 } 00:09:31.654 ] 00:09:31.654 }' 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.654 18:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:31.914 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.915 [2024-12-15 18:40:32.331343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.915 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.183 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.183 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.183 "name": "Existed_Raid", 00:09:32.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.183 "strip_size_kb": 64, 00:09:32.183 "state": "configuring", 00:09:32.183 "raid_level": "concat", 00:09:32.183 "superblock": false, 00:09:32.183 "num_base_bdevs": 4, 00:09:32.183 "num_base_bdevs_discovered": 3, 00:09:32.183 "num_base_bdevs_operational": 4, 00:09:32.183 "base_bdevs_list": [ 00:09:32.183 { 00:09:32.183 "name": "BaseBdev1", 00:09:32.183 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:32.183 "is_configured": true, 00:09:32.183 "data_offset": 0, 00:09:32.183 "data_size": 65536 00:09:32.183 }, 00:09:32.183 { 00:09:32.183 "name": null, 00:09:32.183 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:32.183 "is_configured": false, 00:09:32.183 "data_offset": 0, 00:09:32.183 "data_size": 65536 00:09:32.183 }, 00:09:32.183 { 00:09:32.183 "name": "BaseBdev3", 00:09:32.183 "uuid": 
"0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:32.183 "is_configured": true, 00:09:32.183 "data_offset": 0, 00:09:32.183 "data_size": 65536 00:09:32.183 }, 00:09:32.183 { 00:09:32.183 "name": "BaseBdev4", 00:09:32.183 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:32.183 "is_configured": true, 00:09:32.183 "data_offset": 0, 00:09:32.183 "data_size": 65536 00:09:32.183 } 00:09:32.183 ] 00:09:32.183 }' 00:09:32.183 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.183 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.446 [2024-12-15 18:40:32.846423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.446 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.706 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.706 "name": "Existed_Raid", 00:09:32.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.706 "strip_size_kb": 64, 00:09:32.706 "state": "configuring", 00:09:32.706 "raid_level": "concat", 00:09:32.706 "superblock": false, 00:09:32.706 "num_base_bdevs": 4, 00:09:32.706 
"num_base_bdevs_discovered": 2, 00:09:32.706 "num_base_bdevs_operational": 4, 00:09:32.706 "base_bdevs_list": [ 00:09:32.706 { 00:09:32.706 "name": null, 00:09:32.706 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:32.706 "is_configured": false, 00:09:32.706 "data_offset": 0, 00:09:32.706 "data_size": 65536 00:09:32.706 }, 00:09:32.706 { 00:09:32.706 "name": null, 00:09:32.706 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:32.706 "is_configured": false, 00:09:32.706 "data_offset": 0, 00:09:32.706 "data_size": 65536 00:09:32.706 }, 00:09:32.706 { 00:09:32.706 "name": "BaseBdev3", 00:09:32.706 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:32.706 "is_configured": true, 00:09:32.706 "data_offset": 0, 00:09:32.706 "data_size": 65536 00:09:32.706 }, 00:09:32.706 { 00:09:32.706 "name": "BaseBdev4", 00:09:32.706 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:32.706 "is_configured": true, 00:09:32.706 "data_offset": 0, 00:09:32.706 "data_size": 65536 00:09:32.706 } 00:09:32.706 ] 00:09:32.706 }' 00:09:32.706 18:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.706 18:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.973 [2024-12-15 18:40:33.316299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.973 "name": "Existed_Raid", 00:09:32.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.973 "strip_size_kb": 64, 00:09:32.973 "state": "configuring", 00:09:32.973 "raid_level": "concat", 00:09:32.973 "superblock": false, 00:09:32.973 "num_base_bdevs": 4, 00:09:32.973 "num_base_bdevs_discovered": 3, 00:09:32.973 "num_base_bdevs_operational": 4, 00:09:32.973 "base_bdevs_list": [ 00:09:32.973 { 00:09:32.973 "name": null, 00:09:32.973 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:32.973 "is_configured": false, 00:09:32.973 "data_offset": 0, 00:09:32.973 "data_size": 65536 00:09:32.973 }, 00:09:32.973 { 00:09:32.973 "name": "BaseBdev2", 00:09:32.973 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:32.973 "is_configured": true, 00:09:32.973 "data_offset": 0, 00:09:32.973 "data_size": 65536 00:09:32.973 }, 00:09:32.973 { 00:09:32.973 "name": "BaseBdev3", 00:09:32.973 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:32.973 "is_configured": true, 00:09:32.973 "data_offset": 0, 00:09:32.973 "data_size": 65536 00:09:32.973 }, 00:09:32.973 { 00:09:32.973 "name": "BaseBdev4", 00:09:32.973 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:32.973 "is_configured": true, 00:09:32.973 "data_offset": 0, 00:09:32.973 "data_size": 65536 00:09:32.973 } 00:09:32.973 ] 00:09:32.973 }' 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.973 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 [2024-12-15 18:40:33.842300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:33.558 [2024-12-15 18:40:33.842413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:33.558 [2024-12-15 18:40:33.842437] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:33.558 [2024-12-15 18:40:33.842742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:09:33.558 [2024-12-15 18:40:33.842899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:33.558 [2024-12-15 18:40:33.842941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:33.558 [2024-12-15 18:40:33.843144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.558 NewBaseBdev 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.558 18:40:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.558 [ 00:09:33.558 { 00:09:33.558 "name": "NewBaseBdev", 00:09:33.558 "aliases": [ 00:09:33.558 "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c" 00:09:33.558 ], 00:09:33.558 "product_name": "Malloc disk", 00:09:33.558 "block_size": 512, 00:09:33.558 "num_blocks": 65536, 00:09:33.558 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:33.558 "assigned_rate_limits": { 00:09:33.558 "rw_ios_per_sec": 0, 00:09:33.558 "rw_mbytes_per_sec": 0, 00:09:33.558 "r_mbytes_per_sec": 0, 00:09:33.558 "w_mbytes_per_sec": 0 00:09:33.558 }, 00:09:33.558 "claimed": true, 00:09:33.558 "claim_type": "exclusive_write", 00:09:33.558 "zoned": false, 00:09:33.558 "supported_io_types": { 00:09:33.558 "read": true, 00:09:33.558 "write": true, 00:09:33.558 "unmap": true, 00:09:33.558 "flush": true, 00:09:33.558 "reset": true, 00:09:33.558 "nvme_admin": false, 00:09:33.558 "nvme_io": false, 00:09:33.559 "nvme_io_md": false, 00:09:33.559 "write_zeroes": true, 00:09:33.559 "zcopy": true, 00:09:33.559 "get_zone_info": false, 00:09:33.559 "zone_management": false, 00:09:33.559 "zone_append": false, 00:09:33.559 "compare": false, 00:09:33.559 "compare_and_write": false, 00:09:33.559 "abort": true, 00:09:33.559 "seek_hole": false, 00:09:33.559 "seek_data": false, 00:09:33.559 "copy": true, 00:09:33.559 "nvme_iov_md": false 00:09:33.559 }, 00:09:33.559 "memory_domains": [ 00:09:33.559 { 00:09:33.559 "dma_device_id": "system", 00:09:33.559 "dma_device_type": 1 00:09:33.559 }, 00:09:33.559 { 00:09:33.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.559 "dma_device_type": 2 00:09:33.559 } 00:09:33.559 ], 00:09:33.559 "driver_specific": {} 00:09:33.559 } 00:09:33.559 ] 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.559 "name": "Existed_Raid", 00:09:33.559 "uuid": "a555d812-d3f2-46b6-98c3-3cb3e463e9f4", 00:09:33.559 "strip_size_kb": 64, 00:09:33.559 "state": "online", 00:09:33.559 "raid_level": "concat", 00:09:33.559 "superblock": false, 00:09:33.559 
"num_base_bdevs": 4, 00:09:33.559 "num_base_bdevs_discovered": 4, 00:09:33.559 "num_base_bdevs_operational": 4, 00:09:33.559 "base_bdevs_list": [ 00:09:33.559 { 00:09:33.559 "name": "NewBaseBdev", 00:09:33.559 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:33.559 "is_configured": true, 00:09:33.559 "data_offset": 0, 00:09:33.559 "data_size": 65536 00:09:33.559 }, 00:09:33.559 { 00:09:33.559 "name": "BaseBdev2", 00:09:33.559 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:33.559 "is_configured": true, 00:09:33.559 "data_offset": 0, 00:09:33.559 "data_size": 65536 00:09:33.559 }, 00:09:33.559 { 00:09:33.559 "name": "BaseBdev3", 00:09:33.559 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:33.559 "is_configured": true, 00:09:33.559 "data_offset": 0, 00:09:33.559 "data_size": 65536 00:09:33.559 }, 00:09:33.559 { 00:09:33.559 "name": "BaseBdev4", 00:09:33.559 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:33.559 "is_configured": true, 00:09:33.559 "data_offset": 0, 00:09:33.559 "data_size": 65536 00:09:33.559 } 00:09:33.559 ] 00:09:33.559 }' 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.559 18:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.130 18:40:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.130 [2024-12-15 18:40:34.309890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.130 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.130 "name": "Existed_Raid", 00:09:34.130 "aliases": [ 00:09:34.130 "a555d812-d3f2-46b6-98c3-3cb3e463e9f4" 00:09:34.130 ], 00:09:34.130 "product_name": "Raid Volume", 00:09:34.130 "block_size": 512, 00:09:34.130 "num_blocks": 262144, 00:09:34.130 "uuid": "a555d812-d3f2-46b6-98c3-3cb3e463e9f4", 00:09:34.130 "assigned_rate_limits": { 00:09:34.130 "rw_ios_per_sec": 0, 00:09:34.130 "rw_mbytes_per_sec": 0, 00:09:34.130 "r_mbytes_per_sec": 0, 00:09:34.130 "w_mbytes_per_sec": 0 00:09:34.130 }, 00:09:34.130 "claimed": false, 00:09:34.130 "zoned": false, 00:09:34.130 "supported_io_types": { 00:09:34.130 "read": true, 00:09:34.130 "write": true, 00:09:34.130 "unmap": true, 00:09:34.130 "flush": true, 00:09:34.130 "reset": true, 00:09:34.130 "nvme_admin": false, 00:09:34.130 "nvme_io": false, 00:09:34.130 "nvme_io_md": false, 00:09:34.130 "write_zeroes": true, 00:09:34.130 "zcopy": false, 00:09:34.130 "get_zone_info": false, 00:09:34.130 "zone_management": false, 00:09:34.130 "zone_append": false, 00:09:34.130 "compare": false, 00:09:34.130 "compare_and_write": false, 00:09:34.130 "abort": false, 00:09:34.130 "seek_hole": false, 00:09:34.130 "seek_data": false, 00:09:34.131 "copy": false, 00:09:34.131 "nvme_iov_md": false 00:09:34.131 }, 
00:09:34.131 "memory_domains": [ 00:09:34.131 { 00:09:34.131 "dma_device_id": "system", 00:09:34.131 "dma_device_type": 1 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.131 "dma_device_type": 2 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "system", 00:09:34.131 "dma_device_type": 1 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.131 "dma_device_type": 2 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "system", 00:09:34.131 "dma_device_type": 1 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.131 "dma_device_type": 2 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "system", 00:09:34.131 "dma_device_type": 1 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.131 "dma_device_type": 2 00:09:34.131 } 00:09:34.131 ], 00:09:34.131 "driver_specific": { 00:09:34.131 "raid": { 00:09:34.131 "uuid": "a555d812-d3f2-46b6-98c3-3cb3e463e9f4", 00:09:34.131 "strip_size_kb": 64, 00:09:34.131 "state": "online", 00:09:34.131 "raid_level": "concat", 00:09:34.131 "superblock": false, 00:09:34.131 "num_base_bdevs": 4, 00:09:34.131 "num_base_bdevs_discovered": 4, 00:09:34.131 "num_base_bdevs_operational": 4, 00:09:34.131 "base_bdevs_list": [ 00:09:34.131 { 00:09:34.131 "name": "NewBaseBdev", 00:09:34.131 "uuid": "2f8cc0e8-b23e-43e8-bf3f-b1c37e9eaa1c", 00:09:34.131 "is_configured": true, 00:09:34.131 "data_offset": 0, 00:09:34.131 "data_size": 65536 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "name": "BaseBdev2", 00:09:34.131 "uuid": "2b869371-b1f3-4e80-8f1e-52f9076416a0", 00:09:34.131 "is_configured": true, 00:09:34.131 "data_offset": 0, 00:09:34.131 "data_size": 65536 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "name": "BaseBdev3", 00:09:34.131 "uuid": "0b4a8003-97f6-4a79-a1a3-3690463e54a3", 00:09:34.131 "is_configured": true, 00:09:34.131 "data_offset": 0, 
00:09:34.131 "data_size": 65536 00:09:34.131 }, 00:09:34.131 { 00:09:34.131 "name": "BaseBdev4", 00:09:34.131 "uuid": "50e8a55c-f35c-43a1-bc66-aeec3e4d220b", 00:09:34.131 "is_configured": true, 00:09:34.131 "data_offset": 0, 00:09:34.131 "data_size": 65536 00:09:34.131 } 00:09:34.131 ] 00:09:34.131 } 00:09:34.131 } 00:09:34.131 }' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:34.131 BaseBdev2 00:09:34.131 BaseBdev3 00:09:34.131 BaseBdev4' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.131 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.392 [2024-12-15 18:40:34.624991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.392 [2024-12-15 18:40:34.625062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.392 [2024-12-15 18:40:34.625158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.392 [2024-12-15 18:40:34.625244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.392 [2024-12-15 18:40:34.625278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84089 00:09:34.392 18:40:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84089 ']' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84089 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84089 00:09:34.392 killing process with pid 84089 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84089' 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 84089 00:09:34.392 [2024-12-15 18:40:34.663208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.392 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 84089 00:09:34.392 [2024-12-15 18:40:34.705228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:34.653 00:09:34.653 real 0m9.588s 00:09:34.653 user 0m16.339s 00:09:34.653 sys 0m2.075s 00:09:34.653 ************************************ 00:09:34.653 END TEST raid_state_function_test 00:09:34.653 ************************************ 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 18:40:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:34.653 18:40:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:34.653 18:40:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.653 18:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 ************************************ 00:09:34.653 START TEST raid_state_function_test_sb 00:09:34.653 ************************************ 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:34.653 18:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84738 00:09:34.653 18:40:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84738' 00:09:34.653 Process raid pid: 84738 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84738 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84738 ']' 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.653 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.653 [2024-12-15 18:40:35.087668] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:34.653 [2024-12-15 18:40:35.087886] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.920 [2024-12-15 18:40:35.267491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.920 [2024-12-15 18:40:35.292884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.920 [2024-12-15 18:40:35.335573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.920 [2024-12-15 18:40:35.335705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:35.865 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.865 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:35.865 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.865 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.865 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.865 [2024-12-15 18:40:35.962574] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.865 [2024-12-15 18:40:35.962678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.865 [2024-12-15 18:40:35.962711] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.866 [2024-12-15 18:40:35.962737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.866 [2024-12-15 18:40:35.962755] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:35.866 [2024-12-15 18:40:35.962780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.866 [2024-12-15 18:40:35.962808] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.866 [2024-12-15 18:40:35.962832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.866 
18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.866 18:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.866 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.866 "name": "Existed_Raid", 00:09:35.866 "uuid": "9cbbe894-edca-445a-a222-8d53d4465ab7", 00:09:35.866 "strip_size_kb": 64, 00:09:35.866 "state": "configuring", 00:09:35.866 "raid_level": "concat", 00:09:35.866 "superblock": true, 00:09:35.866 "num_base_bdevs": 4, 00:09:35.866 "num_base_bdevs_discovered": 0, 00:09:35.866 "num_base_bdevs_operational": 4, 00:09:35.866 "base_bdevs_list": [ 00:09:35.866 { 00:09:35.866 "name": "BaseBdev1", 00:09:35.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.866 "is_configured": false, 00:09:35.866 "data_offset": 0, 00:09:35.866 "data_size": 0 00:09:35.866 }, 00:09:35.866 { 00:09:35.866 "name": "BaseBdev2", 00:09:35.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.866 "is_configured": false, 00:09:35.866 "data_offset": 0, 00:09:35.866 "data_size": 0 00:09:35.866 }, 00:09:35.866 { 00:09:35.866 "name": "BaseBdev3", 00:09:35.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.866 "is_configured": false, 00:09:35.866 "data_offset": 0, 00:09:35.866 "data_size": 0 00:09:35.866 }, 00:09:35.866 { 00:09:35.866 "name": "BaseBdev4", 00:09:35.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.866 "is_configured": false, 00:09:35.866 "data_offset": 0, 00:09:35.866 "data_size": 0 00:09:35.866 } 00:09:35.866 ] 00:09:35.866 }' 00:09:35.866 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.866 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 18:40:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 [2024-12-15 18:40:36.389735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.127 [2024-12-15 18:40:36.389852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 [2024-12-15 18:40:36.401720] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.127 [2024-12-15 18:40:36.401797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.127 [2024-12-15 18:40:36.401835] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.127 [2024-12-15 18:40:36.401859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.127 [2024-12-15 18:40:36.401878] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.127 [2024-12-15 18:40:36.401900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.127 [2024-12-15 18:40:36.401918] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:36.127 [2024-12-15 18:40:36.401940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 [2024-12-15 18:40:36.422592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.127 BaseBdev1 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.127 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.128 [ 00:09:36.128 { 00:09:36.128 "name": "BaseBdev1", 00:09:36.128 "aliases": [ 00:09:36.128 "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6" 00:09:36.128 ], 00:09:36.128 "product_name": "Malloc disk", 00:09:36.128 "block_size": 512, 00:09:36.128 "num_blocks": 65536, 00:09:36.128 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:36.128 "assigned_rate_limits": { 00:09:36.128 "rw_ios_per_sec": 0, 00:09:36.128 "rw_mbytes_per_sec": 0, 00:09:36.128 "r_mbytes_per_sec": 0, 00:09:36.128 "w_mbytes_per_sec": 0 00:09:36.128 }, 00:09:36.128 "claimed": true, 00:09:36.128 "claim_type": "exclusive_write", 00:09:36.128 "zoned": false, 00:09:36.128 "supported_io_types": { 00:09:36.128 "read": true, 00:09:36.128 "write": true, 00:09:36.128 "unmap": true, 00:09:36.128 "flush": true, 00:09:36.128 "reset": true, 00:09:36.128 "nvme_admin": false, 00:09:36.128 "nvme_io": false, 00:09:36.128 "nvme_io_md": false, 00:09:36.128 "write_zeroes": true, 00:09:36.128 "zcopy": true, 00:09:36.128 "get_zone_info": false, 00:09:36.128 "zone_management": false, 00:09:36.128 "zone_append": false, 00:09:36.128 "compare": false, 00:09:36.128 "compare_and_write": false, 00:09:36.128 "abort": true, 00:09:36.128 "seek_hole": false, 00:09:36.128 "seek_data": false, 00:09:36.128 "copy": true, 00:09:36.128 "nvme_iov_md": false 00:09:36.128 }, 00:09:36.128 "memory_domains": [ 00:09:36.128 { 00:09:36.128 "dma_device_id": "system", 00:09:36.128 "dma_device_type": 1 00:09:36.128 }, 00:09:36.128 { 00:09:36.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.128 "dma_device_type": 2 00:09:36.128 } 
00:09:36.128 ], 00:09:36.128 "driver_specific": {} 00:09:36.128 } 00:09:36.128 ] 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.128 18:40:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.128 "name": "Existed_Raid", 00:09:36.128 "uuid": "280c0eba-c8ff-4763-b60e-d0ef28261b7a", 00:09:36.128 "strip_size_kb": 64, 00:09:36.128 "state": "configuring", 00:09:36.128 "raid_level": "concat", 00:09:36.128 "superblock": true, 00:09:36.128 "num_base_bdevs": 4, 00:09:36.128 "num_base_bdevs_discovered": 1, 00:09:36.128 "num_base_bdevs_operational": 4, 00:09:36.128 "base_bdevs_list": [ 00:09:36.128 { 00:09:36.128 "name": "BaseBdev1", 00:09:36.128 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:36.128 "is_configured": true, 00:09:36.128 "data_offset": 2048, 00:09:36.128 "data_size": 63488 00:09:36.128 }, 00:09:36.128 { 00:09:36.128 "name": "BaseBdev2", 00:09:36.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.128 "is_configured": false, 00:09:36.128 "data_offset": 0, 00:09:36.128 "data_size": 0 00:09:36.128 }, 00:09:36.128 { 00:09:36.128 "name": "BaseBdev3", 00:09:36.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.128 "is_configured": false, 00:09:36.128 "data_offset": 0, 00:09:36.128 "data_size": 0 00:09:36.128 }, 00:09:36.128 { 00:09:36.128 "name": "BaseBdev4", 00:09:36.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.128 "is_configured": false, 00:09:36.128 "data_offset": 0, 00:09:36.128 "data_size": 0 00:09:36.128 } 00:09:36.128 ] 00:09:36.128 }' 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.128 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.699 18:40:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.699 [2024-12-15 18:40:36.909829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.699 [2024-12-15 18:40:36.909931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.699 [2024-12-15 18:40:36.921848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.699 [2024-12-15 18:40:36.923693] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.699 [2024-12-15 18:40:36.923771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.699 [2024-12-15 18:40:36.923808] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.699 [2024-12-15 18:40:36.923832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.699 [2024-12-15 18:40:36.923850] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:36.699 [2024-12-15 18:40:36.923870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:36.699 "name": "Existed_Raid", 00:09:36.699 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:36.699 "strip_size_kb": 64, 00:09:36.699 "state": "configuring", 00:09:36.699 "raid_level": "concat", 00:09:36.699 "superblock": true, 00:09:36.699 "num_base_bdevs": 4, 00:09:36.699 "num_base_bdevs_discovered": 1, 00:09:36.699 "num_base_bdevs_operational": 4, 00:09:36.699 "base_bdevs_list": [ 00:09:36.699 { 00:09:36.699 "name": "BaseBdev1", 00:09:36.699 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:36.699 "is_configured": true, 00:09:36.699 "data_offset": 2048, 00:09:36.699 "data_size": 63488 00:09:36.699 }, 00:09:36.699 { 00:09:36.699 "name": "BaseBdev2", 00:09:36.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.699 "is_configured": false, 00:09:36.699 "data_offset": 0, 00:09:36.699 "data_size": 0 00:09:36.699 }, 00:09:36.699 { 00:09:36.699 "name": "BaseBdev3", 00:09:36.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.699 "is_configured": false, 00:09:36.699 "data_offset": 0, 00:09:36.699 "data_size": 0 00:09:36.699 }, 00:09:36.699 { 00:09:36.699 "name": "BaseBdev4", 00:09:36.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.699 "is_configured": false, 00:09:36.699 "data_offset": 0, 00:09:36.699 "data_size": 0 00:09:36.699 } 00:09:36.699 ] 00:09:36.699 }' 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.699 18:40:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.959 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.960 [2024-12-15 18:40:37.344053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:36.960 BaseBdev2 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.960 [ 00:09:36.960 { 00:09:36.960 "name": "BaseBdev2", 00:09:36.960 "aliases": [ 00:09:36.960 "0e4dc133-de41-4837-a2e2-3a77a4059e8b" 00:09:36.960 ], 00:09:36.960 "product_name": "Malloc disk", 00:09:36.960 "block_size": 512, 00:09:36.960 "num_blocks": 65536, 00:09:36.960 "uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 
00:09:36.960 "assigned_rate_limits": { 00:09:36.960 "rw_ios_per_sec": 0, 00:09:36.960 "rw_mbytes_per_sec": 0, 00:09:36.960 "r_mbytes_per_sec": 0, 00:09:36.960 "w_mbytes_per_sec": 0 00:09:36.960 }, 00:09:36.960 "claimed": true, 00:09:36.960 "claim_type": "exclusive_write", 00:09:36.960 "zoned": false, 00:09:36.960 "supported_io_types": { 00:09:36.960 "read": true, 00:09:36.960 "write": true, 00:09:36.960 "unmap": true, 00:09:36.960 "flush": true, 00:09:36.960 "reset": true, 00:09:36.960 "nvme_admin": false, 00:09:36.960 "nvme_io": false, 00:09:36.960 "nvme_io_md": false, 00:09:36.960 "write_zeroes": true, 00:09:36.960 "zcopy": true, 00:09:36.960 "get_zone_info": false, 00:09:36.960 "zone_management": false, 00:09:36.960 "zone_append": false, 00:09:36.960 "compare": false, 00:09:36.960 "compare_and_write": false, 00:09:36.960 "abort": true, 00:09:36.960 "seek_hole": false, 00:09:36.960 "seek_data": false, 00:09:36.960 "copy": true, 00:09:36.960 "nvme_iov_md": false 00:09:36.960 }, 00:09:36.960 "memory_domains": [ 00:09:36.960 { 00:09:36.960 "dma_device_id": "system", 00:09:36.960 "dma_device_type": 1 00:09:36.960 }, 00:09:36.960 { 00:09:36.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.960 "dma_device_type": 2 00:09:36.960 } 00:09:36.960 ], 00:09:36.960 "driver_specific": {} 00:09:36.960 } 00:09:36.960 ] 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.960 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.221 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.221 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.221 "name": "Existed_Raid", 00:09:37.221 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:37.221 "strip_size_kb": 64, 00:09:37.221 "state": "configuring", 00:09:37.221 "raid_level": "concat", 00:09:37.221 "superblock": true, 00:09:37.221 "num_base_bdevs": 4, 00:09:37.221 "num_base_bdevs_discovered": 2, 00:09:37.221 
"num_base_bdevs_operational": 4, 00:09:37.221 "base_bdevs_list": [ 00:09:37.221 { 00:09:37.221 "name": "BaseBdev1", 00:09:37.221 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:37.221 "is_configured": true, 00:09:37.221 "data_offset": 2048, 00:09:37.221 "data_size": 63488 00:09:37.221 }, 00:09:37.221 { 00:09:37.221 "name": "BaseBdev2", 00:09:37.221 "uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 00:09:37.221 "is_configured": true, 00:09:37.221 "data_offset": 2048, 00:09:37.221 "data_size": 63488 00:09:37.221 }, 00:09:37.221 { 00:09:37.221 "name": "BaseBdev3", 00:09:37.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.221 "is_configured": false, 00:09:37.221 "data_offset": 0, 00:09:37.221 "data_size": 0 00:09:37.221 }, 00:09:37.221 { 00:09:37.221 "name": "BaseBdev4", 00:09:37.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.221 "is_configured": false, 00:09:37.221 "data_offset": 0, 00:09:37.221 "data_size": 0 00:09:37.221 } 00:09:37.221 ] 00:09:37.221 }' 00:09:37.221 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.221 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.481 [2024-12-15 18:40:37.850703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.481 BaseBdev3 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.481 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.482 [ 00:09:37.482 { 00:09:37.482 "name": "BaseBdev3", 00:09:37.482 "aliases": [ 00:09:37.482 "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0" 00:09:37.482 ], 00:09:37.482 "product_name": "Malloc disk", 00:09:37.482 "block_size": 512, 00:09:37.482 "num_blocks": 65536, 00:09:37.482 "uuid": "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0", 00:09:37.482 "assigned_rate_limits": { 00:09:37.482 "rw_ios_per_sec": 0, 00:09:37.482 "rw_mbytes_per_sec": 0, 00:09:37.482 "r_mbytes_per_sec": 0, 00:09:37.482 "w_mbytes_per_sec": 0 00:09:37.482 }, 00:09:37.482 "claimed": true, 00:09:37.482 "claim_type": "exclusive_write", 00:09:37.482 "zoned": false, 00:09:37.482 "supported_io_types": { 
00:09:37.482 "read": true, 00:09:37.482 "write": true, 00:09:37.482 "unmap": true, 00:09:37.482 "flush": true, 00:09:37.482 "reset": true, 00:09:37.482 "nvme_admin": false, 00:09:37.482 "nvme_io": false, 00:09:37.482 "nvme_io_md": false, 00:09:37.482 "write_zeroes": true, 00:09:37.482 "zcopy": true, 00:09:37.482 "get_zone_info": false, 00:09:37.482 "zone_management": false, 00:09:37.482 "zone_append": false, 00:09:37.482 "compare": false, 00:09:37.482 "compare_and_write": false, 00:09:37.482 "abort": true, 00:09:37.482 "seek_hole": false, 00:09:37.482 "seek_data": false, 00:09:37.482 "copy": true, 00:09:37.482 "nvme_iov_md": false 00:09:37.482 }, 00:09:37.482 "memory_domains": [ 00:09:37.482 { 00:09:37.482 "dma_device_id": "system", 00:09:37.482 "dma_device_type": 1 00:09:37.482 }, 00:09:37.482 { 00:09:37.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.482 "dma_device_type": 2 00:09:37.482 } 00:09:37.482 ], 00:09:37.482 "driver_specific": {} 00:09:37.482 } 00:09:37.482 ] 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.482 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.743 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.743 "name": "Existed_Raid", 00:09:37.743 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:37.743 "strip_size_kb": 64, 00:09:37.743 "state": "configuring", 00:09:37.743 "raid_level": "concat", 00:09:37.743 "superblock": true, 00:09:37.743 "num_base_bdevs": 4, 00:09:37.743 "num_base_bdevs_discovered": 3, 00:09:37.743 "num_base_bdevs_operational": 4, 00:09:37.743 "base_bdevs_list": [ 00:09:37.743 { 00:09:37.743 "name": "BaseBdev1", 00:09:37.743 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:37.743 "is_configured": true, 00:09:37.743 "data_offset": 2048, 00:09:37.743 "data_size": 63488 00:09:37.743 }, 00:09:37.743 { 00:09:37.743 "name": "BaseBdev2", 00:09:37.743 
"uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 00:09:37.743 "is_configured": true, 00:09:37.743 "data_offset": 2048, 00:09:37.743 "data_size": 63488 00:09:37.743 }, 00:09:37.743 { 00:09:37.743 "name": "BaseBdev3", 00:09:37.743 "uuid": "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0", 00:09:37.743 "is_configured": true, 00:09:37.743 "data_offset": 2048, 00:09:37.743 "data_size": 63488 00:09:37.743 }, 00:09:37.743 { 00:09:37.743 "name": "BaseBdev4", 00:09:37.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.743 "is_configured": false, 00:09:37.743 "data_offset": 0, 00:09:37.743 "data_size": 0 00:09:37.743 } 00:09:37.743 ] 00:09:37.743 }' 00:09:37.743 18:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.743 18:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.003 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:38.003 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.003 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.003 [2024-12-15 18:40:38.333077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:38.004 [2024-12-15 18:40:38.333390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:38.004 [2024-12-15 18:40:38.333447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:38.004 BaseBdev4 00:09:38.004 [2024-12-15 18:40:38.333753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.004 [2024-12-15 18:40:38.333929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:38.004 [2024-12-15 18:40:38.333948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:38.004 [2024-12-15 18:40:38.334067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.004 [ 00:09:38.004 { 00:09:38.004 "name": "BaseBdev4", 00:09:38.004 "aliases": [ 00:09:38.004 "9102896a-0a3a-4b11-b66f-e867d97f8aca" 00:09:38.004 ], 00:09:38.004 "product_name": "Malloc disk", 00:09:38.004 "block_size": 512, 00:09:38.004 
"num_blocks": 65536, 00:09:38.004 "uuid": "9102896a-0a3a-4b11-b66f-e867d97f8aca", 00:09:38.004 "assigned_rate_limits": { 00:09:38.004 "rw_ios_per_sec": 0, 00:09:38.004 "rw_mbytes_per_sec": 0, 00:09:38.004 "r_mbytes_per_sec": 0, 00:09:38.004 "w_mbytes_per_sec": 0 00:09:38.004 }, 00:09:38.004 "claimed": true, 00:09:38.004 "claim_type": "exclusive_write", 00:09:38.004 "zoned": false, 00:09:38.004 "supported_io_types": { 00:09:38.004 "read": true, 00:09:38.004 "write": true, 00:09:38.004 "unmap": true, 00:09:38.004 "flush": true, 00:09:38.004 "reset": true, 00:09:38.004 "nvme_admin": false, 00:09:38.004 "nvme_io": false, 00:09:38.004 "nvme_io_md": false, 00:09:38.004 "write_zeroes": true, 00:09:38.004 "zcopy": true, 00:09:38.004 "get_zone_info": false, 00:09:38.004 "zone_management": false, 00:09:38.004 "zone_append": false, 00:09:38.004 "compare": false, 00:09:38.004 "compare_and_write": false, 00:09:38.004 "abort": true, 00:09:38.004 "seek_hole": false, 00:09:38.004 "seek_data": false, 00:09:38.004 "copy": true, 00:09:38.004 "nvme_iov_md": false 00:09:38.004 }, 00:09:38.004 "memory_domains": [ 00:09:38.004 { 00:09:38.004 "dma_device_id": "system", 00:09:38.004 "dma_device_type": 1 00:09:38.004 }, 00:09:38.004 { 00:09:38.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.004 "dma_device_type": 2 00:09:38.004 } 00:09:38.004 ], 00:09:38.004 "driver_specific": {} 00:09:38.004 } 00:09:38.004 ] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.004 "name": "Existed_Raid", 00:09:38.004 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:38.004 "strip_size_kb": 64, 00:09:38.004 "state": "online", 00:09:38.004 "raid_level": "concat", 00:09:38.004 "superblock": true, 00:09:38.004 "num_base_bdevs": 4, 
00:09:38.004 "num_base_bdevs_discovered": 4, 00:09:38.004 "num_base_bdevs_operational": 4, 00:09:38.004 "base_bdevs_list": [ 00:09:38.004 { 00:09:38.004 "name": "BaseBdev1", 00:09:38.004 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:38.004 "is_configured": true, 00:09:38.004 "data_offset": 2048, 00:09:38.004 "data_size": 63488 00:09:38.004 }, 00:09:38.004 { 00:09:38.004 "name": "BaseBdev2", 00:09:38.004 "uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 00:09:38.004 "is_configured": true, 00:09:38.004 "data_offset": 2048, 00:09:38.004 "data_size": 63488 00:09:38.004 }, 00:09:38.004 { 00:09:38.004 "name": "BaseBdev3", 00:09:38.004 "uuid": "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0", 00:09:38.004 "is_configured": true, 00:09:38.004 "data_offset": 2048, 00:09:38.004 "data_size": 63488 00:09:38.004 }, 00:09:38.004 { 00:09:38.004 "name": "BaseBdev4", 00:09:38.004 "uuid": "9102896a-0a3a-4b11-b66f-e867d97f8aca", 00:09:38.004 "is_configured": true, 00:09:38.004 "data_offset": 2048, 00:09:38.004 "data_size": 63488 00:09:38.004 } 00:09:38.004 ] 00:09:38.004 }' 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.004 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.575 
18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.575 [2024-12-15 18:40:38.804697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.575 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.575 "name": "Existed_Raid", 00:09:38.575 "aliases": [ 00:09:38.575 "4de0053f-2ead-4f89-aecf-59338d8eddd2" 00:09:38.575 ], 00:09:38.575 "product_name": "Raid Volume", 00:09:38.575 "block_size": 512, 00:09:38.575 "num_blocks": 253952, 00:09:38.575 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:38.575 "assigned_rate_limits": { 00:09:38.575 "rw_ios_per_sec": 0, 00:09:38.575 "rw_mbytes_per_sec": 0, 00:09:38.575 "r_mbytes_per_sec": 0, 00:09:38.575 "w_mbytes_per_sec": 0 00:09:38.575 }, 00:09:38.575 "claimed": false, 00:09:38.575 "zoned": false, 00:09:38.575 "supported_io_types": { 00:09:38.575 "read": true, 00:09:38.575 "write": true, 00:09:38.575 "unmap": true, 00:09:38.575 "flush": true, 00:09:38.575 "reset": true, 00:09:38.575 "nvme_admin": false, 00:09:38.575 "nvme_io": false, 00:09:38.575 "nvme_io_md": false, 00:09:38.575 "write_zeroes": true, 00:09:38.575 "zcopy": false, 00:09:38.575 "get_zone_info": false, 00:09:38.575 "zone_management": false, 00:09:38.575 "zone_append": false, 00:09:38.575 "compare": false, 00:09:38.575 "compare_and_write": false, 00:09:38.575 "abort": false, 00:09:38.575 "seek_hole": false, 00:09:38.575 "seek_data": false, 00:09:38.575 "copy": false, 00:09:38.575 
"nvme_iov_md": false 00:09:38.575 }, 00:09:38.575 "memory_domains": [ 00:09:38.575 { 00:09:38.575 "dma_device_id": "system", 00:09:38.575 "dma_device_type": 1 00:09:38.575 }, 00:09:38.575 { 00:09:38.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.575 "dma_device_type": 2 00:09:38.575 }, 00:09:38.575 { 00:09:38.575 "dma_device_id": "system", 00:09:38.575 "dma_device_type": 1 00:09:38.575 }, 00:09:38.575 { 00:09:38.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.576 "dma_device_type": 2 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "dma_device_id": "system", 00:09:38.576 "dma_device_type": 1 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.576 "dma_device_type": 2 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "dma_device_id": "system", 00:09:38.576 "dma_device_type": 1 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.576 "dma_device_type": 2 00:09:38.576 } 00:09:38.576 ], 00:09:38.576 "driver_specific": { 00:09:38.576 "raid": { 00:09:38.576 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:38.576 "strip_size_kb": 64, 00:09:38.576 "state": "online", 00:09:38.576 "raid_level": "concat", 00:09:38.576 "superblock": true, 00:09:38.576 "num_base_bdevs": 4, 00:09:38.576 "num_base_bdevs_discovered": 4, 00:09:38.576 "num_base_bdevs_operational": 4, 00:09:38.576 "base_bdevs_list": [ 00:09:38.576 { 00:09:38.576 "name": "BaseBdev1", 00:09:38.576 "uuid": "175995a4-dbb7-4df2-8ad3-41ceaf75c4f6", 00:09:38.576 "is_configured": true, 00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": "BaseBdev2", 00:09:38.576 "uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 00:09:38.576 "is_configured": true, 00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": "BaseBdev3", 00:09:38.576 "uuid": "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0", 00:09:38.576 "is_configured": true, 
00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 }, 00:09:38.576 { 00:09:38.576 "name": "BaseBdev4", 00:09:38.576 "uuid": "9102896a-0a3a-4b11-b66f-e867d97f8aca", 00:09:38.576 "is_configured": true, 00:09:38.576 "data_offset": 2048, 00:09:38.576 "data_size": 63488 00:09:38.576 } 00:09:38.576 ] 00:09:38.576 } 00:09:38.576 } 00:09:38.576 }' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:38.576 BaseBdev2 00:09:38.576 BaseBdev3 00:09:38.576 BaseBdev4' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.576 18:40:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.576 18:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.836 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.837 [2024-12-15 18:40:39.127925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.837 [2024-12-15 18:40:39.127966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.837 [2024-12-15 18:40:39.128017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.837 "name": "Existed_Raid", 00:09:38.837 "uuid": "4de0053f-2ead-4f89-aecf-59338d8eddd2", 00:09:38.837 "strip_size_kb": 64, 00:09:38.837 "state": "offline", 00:09:38.837 "raid_level": "concat", 00:09:38.837 "superblock": true, 00:09:38.837 "num_base_bdevs": 4, 00:09:38.837 "num_base_bdevs_discovered": 3, 00:09:38.837 "num_base_bdevs_operational": 3, 00:09:38.837 "base_bdevs_list": [ 00:09:38.837 { 00:09:38.837 "name": null, 00:09:38.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.837 "is_configured": false, 00:09:38.837 "data_offset": 0, 00:09:38.837 "data_size": 63488 00:09:38.837 }, 00:09:38.837 { 00:09:38.837 "name": "BaseBdev2", 00:09:38.837 "uuid": "0e4dc133-de41-4837-a2e2-3a77a4059e8b", 00:09:38.837 "is_configured": true, 00:09:38.837 "data_offset": 2048, 00:09:38.837 "data_size": 63488 00:09:38.837 }, 00:09:38.837 { 00:09:38.837 "name": "BaseBdev3", 00:09:38.837 "uuid": "f9cd6ca4-3107-4e1e-9c1a-6392e254c2c0", 00:09:38.837 "is_configured": true, 00:09:38.837 "data_offset": 2048, 00:09:38.837 "data_size": 63488 00:09:38.837 }, 00:09:38.837 { 00:09:38.837 "name": "BaseBdev4", 00:09:38.837 "uuid": "9102896a-0a3a-4b11-b66f-e867d97f8aca", 00:09:38.837 "is_configured": true, 00:09:38.837 "data_offset": 2048, 00:09:38.837 "data_size": 63488 00:09:38.837 } 00:09:38.837 ] 00:09:38.837 }' 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.837 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.407 
18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 [2024-12-15 18:40:39.666513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 [2024-12-15 18:40:39.725446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:39.407 18:40:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 [2024-12-15 18:40:39.784534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:39.407 [2024-12-15 18:40:39.784580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.407 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.670 BaseBdev2 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.670 [ 00:09:39.670 { 00:09:39.670 "name": "BaseBdev2", 00:09:39.670 "aliases": [ 00:09:39.670 
"2e03625f-a05c-488a-bd34-fd85c10f3f33" 00:09:39.670 ], 00:09:39.670 "product_name": "Malloc disk", 00:09:39.670 "block_size": 512, 00:09:39.670 "num_blocks": 65536, 00:09:39.670 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:39.670 "assigned_rate_limits": { 00:09:39.670 "rw_ios_per_sec": 0, 00:09:39.670 "rw_mbytes_per_sec": 0, 00:09:39.670 "r_mbytes_per_sec": 0, 00:09:39.670 "w_mbytes_per_sec": 0 00:09:39.670 }, 00:09:39.670 "claimed": false, 00:09:39.670 "zoned": false, 00:09:39.670 "supported_io_types": { 00:09:39.670 "read": true, 00:09:39.670 "write": true, 00:09:39.670 "unmap": true, 00:09:39.670 "flush": true, 00:09:39.670 "reset": true, 00:09:39.670 "nvme_admin": false, 00:09:39.670 "nvme_io": false, 00:09:39.670 "nvme_io_md": false, 00:09:39.670 "write_zeroes": true, 00:09:39.670 "zcopy": true, 00:09:39.670 "get_zone_info": false, 00:09:39.670 "zone_management": false, 00:09:39.670 "zone_append": false, 00:09:39.670 "compare": false, 00:09:39.670 "compare_and_write": false, 00:09:39.670 "abort": true, 00:09:39.670 "seek_hole": false, 00:09:39.670 "seek_data": false, 00:09:39.670 "copy": true, 00:09:39.670 "nvme_iov_md": false 00:09:39.670 }, 00:09:39.670 "memory_domains": [ 00:09:39.670 { 00:09:39.670 "dma_device_id": "system", 00:09:39.670 "dma_device_type": 1 00:09:39.670 }, 00:09:39.670 { 00:09:39.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.670 "dma_device_type": 2 00:09:39.670 } 00:09:39.670 ], 00:09:39.670 "driver_specific": {} 00:09:39.670 } 00:09:39.670 ] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.670 18:40:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.670 BaseBdev3 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.670 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 [ 00:09:39.671 { 
00:09:39.671 "name": "BaseBdev3", 00:09:39.671 "aliases": [ 00:09:39.671 "583dee2f-d6b3-4f43-b2e3-ff0befb5440f" 00:09:39.671 ], 00:09:39.671 "product_name": "Malloc disk", 00:09:39.671 "block_size": 512, 00:09:39.671 "num_blocks": 65536, 00:09:39.671 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:39.671 "assigned_rate_limits": { 00:09:39.671 "rw_ios_per_sec": 0, 00:09:39.671 "rw_mbytes_per_sec": 0, 00:09:39.671 "r_mbytes_per_sec": 0, 00:09:39.671 "w_mbytes_per_sec": 0 00:09:39.671 }, 00:09:39.671 "claimed": false, 00:09:39.671 "zoned": false, 00:09:39.671 "supported_io_types": { 00:09:39.671 "read": true, 00:09:39.671 "write": true, 00:09:39.671 "unmap": true, 00:09:39.671 "flush": true, 00:09:39.671 "reset": true, 00:09:39.671 "nvme_admin": false, 00:09:39.671 "nvme_io": false, 00:09:39.671 "nvme_io_md": false, 00:09:39.671 "write_zeroes": true, 00:09:39.671 "zcopy": true, 00:09:39.671 "get_zone_info": false, 00:09:39.671 "zone_management": false, 00:09:39.671 "zone_append": false, 00:09:39.671 "compare": false, 00:09:39.671 "compare_and_write": false, 00:09:39.671 "abort": true, 00:09:39.671 "seek_hole": false, 00:09:39.671 "seek_data": false, 00:09:39.671 "copy": true, 00:09:39.671 "nvme_iov_md": false 00:09:39.671 }, 00:09:39.671 "memory_domains": [ 00:09:39.671 { 00:09:39.671 "dma_device_id": "system", 00:09:39.671 "dma_device_type": 1 00:09:39.671 }, 00:09:39.671 { 00:09:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.671 "dma_device_type": 2 00:09:39.671 } 00:09:39.671 ], 00:09:39.671 "driver_specific": {} 00:09:39.671 } 00:09:39.671 ] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 BaseBdev4 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:39.671 [ 00:09:39.671 { 00:09:39.671 "name": "BaseBdev4", 00:09:39.671 "aliases": [ 00:09:39.671 "d58d6c17-afa1-42c9-9d90-8f04c707ca2e" 00:09:39.671 ], 00:09:39.671 "product_name": "Malloc disk", 00:09:39.671 "block_size": 512, 00:09:39.671 "num_blocks": 65536, 00:09:39.671 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:39.671 "assigned_rate_limits": { 00:09:39.671 "rw_ios_per_sec": 0, 00:09:39.671 "rw_mbytes_per_sec": 0, 00:09:39.671 "r_mbytes_per_sec": 0, 00:09:39.671 "w_mbytes_per_sec": 0 00:09:39.671 }, 00:09:39.671 "claimed": false, 00:09:39.671 "zoned": false, 00:09:39.671 "supported_io_types": { 00:09:39.671 "read": true, 00:09:39.671 "write": true, 00:09:39.671 "unmap": true, 00:09:39.671 "flush": true, 00:09:39.671 "reset": true, 00:09:39.671 "nvme_admin": false, 00:09:39.671 "nvme_io": false, 00:09:39.671 "nvme_io_md": false, 00:09:39.671 "write_zeroes": true, 00:09:39.671 "zcopy": true, 00:09:39.671 "get_zone_info": false, 00:09:39.671 "zone_management": false, 00:09:39.671 "zone_append": false, 00:09:39.671 "compare": false, 00:09:39.671 "compare_and_write": false, 00:09:39.671 "abort": true, 00:09:39.671 "seek_hole": false, 00:09:39.671 "seek_data": false, 00:09:39.671 "copy": true, 00:09:39.671 "nvme_iov_md": false 00:09:39.671 }, 00:09:39.671 "memory_domains": [ 00:09:39.671 { 00:09:39.671 "dma_device_id": "system", 00:09:39.671 "dma_device_type": 1 00:09:39.671 }, 00:09:39.671 { 00:09:39.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.671 "dma_device_type": 2 00:09:39.671 } 00:09:39.671 ], 00:09:39.671 "driver_specific": {} 00:09:39.671 } 00:09:39.671 ] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:39.671 18:40:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 [2024-12-15 18:40:40.002143] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.671 [2024-12-15 18:40:40.002187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.671 [2024-12-15 18:40:40.002207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.671 [2024-12-15 18:40:40.004006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.671 [2024-12-15 18:40:40.004056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.671 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.672 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.672 "name": "Existed_Raid", 00:09:39.672 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:39.672 "strip_size_kb": 64, 00:09:39.672 "state": "configuring", 00:09:39.672 "raid_level": "concat", 00:09:39.672 "superblock": true, 00:09:39.672 "num_base_bdevs": 4, 00:09:39.672 "num_base_bdevs_discovered": 3, 00:09:39.672 "num_base_bdevs_operational": 4, 00:09:39.672 "base_bdevs_list": [ 00:09:39.672 { 00:09:39.672 "name": "BaseBdev1", 00:09:39.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.672 "is_configured": false, 00:09:39.672 "data_offset": 0, 00:09:39.672 "data_size": 0 00:09:39.672 }, 00:09:39.672 { 00:09:39.672 "name": "BaseBdev2", 00:09:39.672 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 
00:09:39.672 }, 00:09:39.672 { 00:09:39.672 "name": "BaseBdev3", 00:09:39.672 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 00:09:39.672 }, 00:09:39.672 { 00:09:39.672 "name": "BaseBdev4", 00:09:39.672 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:39.672 "is_configured": true, 00:09:39.672 "data_offset": 2048, 00:09:39.672 "data_size": 63488 00:09:39.672 } 00:09:39.672 ] 00:09:39.672 }' 00:09:39.672 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.672 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.241 [2024-12-15 18:40:40.461402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.241 "name": "Existed_Raid", 00:09:40.241 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:40.241 "strip_size_kb": 64, 00:09:40.241 "state": "configuring", 00:09:40.241 "raid_level": "concat", 00:09:40.241 "superblock": true, 00:09:40.241 "num_base_bdevs": 4, 00:09:40.241 "num_base_bdevs_discovered": 2, 00:09:40.241 "num_base_bdevs_operational": 4, 00:09:40.241 "base_bdevs_list": [ 00:09:40.241 { 00:09:40.241 "name": "BaseBdev1", 00:09:40.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.241 "is_configured": false, 00:09:40.241 "data_offset": 0, 00:09:40.241 "data_size": 0 00:09:40.241 }, 00:09:40.241 { 00:09:40.241 "name": null, 00:09:40.241 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:40.241 "is_configured": false, 00:09:40.241 "data_offset": 0, 00:09:40.241 "data_size": 63488 
00:09:40.241 }, 00:09:40.241 { 00:09:40.241 "name": "BaseBdev3", 00:09:40.241 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:40.241 "is_configured": true, 00:09:40.241 "data_offset": 2048, 00:09:40.241 "data_size": 63488 00:09:40.241 }, 00:09:40.241 { 00:09:40.241 "name": "BaseBdev4", 00:09:40.241 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:40.241 "is_configured": true, 00:09:40.241 "data_offset": 2048, 00:09:40.241 "data_size": 63488 00:09:40.241 } 00:09:40.241 ] 00:09:40.241 }' 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.241 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.500 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.500 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.500 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.500 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.760 [2024-12-15 18:40:40.991448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.760 BaseBdev1 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.760 18:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.761 [ 00:09:40.761 { 00:09:40.761 "name": "BaseBdev1", 00:09:40.761 "aliases": [ 00:09:40.761 "01a8027f-769e-4a3b-ba04-a25f13edf598" 00:09:40.761 ], 00:09:40.761 "product_name": "Malloc disk", 00:09:40.761 "block_size": 512, 00:09:40.761 "num_blocks": 65536, 00:09:40.761 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:40.761 "assigned_rate_limits": { 00:09:40.761 "rw_ios_per_sec": 0, 00:09:40.761 "rw_mbytes_per_sec": 0, 
00:09:40.761 "r_mbytes_per_sec": 0, 00:09:40.761 "w_mbytes_per_sec": 0 00:09:40.761 }, 00:09:40.761 "claimed": true, 00:09:40.761 "claim_type": "exclusive_write", 00:09:40.761 "zoned": false, 00:09:40.761 "supported_io_types": { 00:09:40.761 "read": true, 00:09:40.761 "write": true, 00:09:40.761 "unmap": true, 00:09:40.761 "flush": true, 00:09:40.761 "reset": true, 00:09:40.761 "nvme_admin": false, 00:09:40.761 "nvme_io": false, 00:09:40.761 "nvme_io_md": false, 00:09:40.761 "write_zeroes": true, 00:09:40.761 "zcopy": true, 00:09:40.761 "get_zone_info": false, 00:09:40.761 "zone_management": false, 00:09:40.761 "zone_append": false, 00:09:40.761 "compare": false, 00:09:40.761 "compare_and_write": false, 00:09:40.761 "abort": true, 00:09:40.761 "seek_hole": false, 00:09:40.761 "seek_data": false, 00:09:40.761 "copy": true, 00:09:40.761 "nvme_iov_md": false 00:09:40.761 }, 00:09:40.761 "memory_domains": [ 00:09:40.761 { 00:09:40.761 "dma_device_id": "system", 00:09:40.761 "dma_device_type": 1 00:09:40.761 }, 00:09:40.761 { 00:09:40.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.761 "dma_device_type": 2 00:09:40.761 } 00:09:40.761 ], 00:09:40.761 "driver_specific": {} 00:09:40.761 } 00:09:40.761 ] 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.761 18:40:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.761 "name": "Existed_Raid", 00:09:40.761 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:40.761 "strip_size_kb": 64, 00:09:40.761 "state": "configuring", 00:09:40.761 "raid_level": "concat", 00:09:40.761 "superblock": true, 00:09:40.761 "num_base_bdevs": 4, 00:09:40.761 "num_base_bdevs_discovered": 3, 00:09:40.761 "num_base_bdevs_operational": 4, 00:09:40.761 "base_bdevs_list": [ 00:09:40.761 { 00:09:40.761 "name": "BaseBdev1", 00:09:40.761 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:40.761 "is_configured": true, 00:09:40.761 "data_offset": 2048, 00:09:40.761 "data_size": 63488 00:09:40.761 }, 00:09:40.761 { 
00:09:40.761 "name": null, 00:09:40.761 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:40.761 "is_configured": false, 00:09:40.761 "data_offset": 0, 00:09:40.761 "data_size": 63488 00:09:40.761 }, 00:09:40.761 { 00:09:40.761 "name": "BaseBdev3", 00:09:40.761 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:40.761 "is_configured": true, 00:09:40.761 "data_offset": 2048, 00:09:40.761 "data_size": 63488 00:09:40.761 }, 00:09:40.761 { 00:09:40.761 "name": "BaseBdev4", 00:09:40.761 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:40.761 "is_configured": true, 00:09:40.761 "data_offset": 2048, 00:09:40.761 "data_size": 63488 00:09:40.761 } 00:09:40.761 ] 00:09:40.761 }' 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.761 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.332 [2024-12-15 18:40:41.514614] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.332 18:40:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.332 "name": "Existed_Raid", 00:09:41.332 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:41.332 "strip_size_kb": 64, 00:09:41.332 "state": "configuring", 00:09:41.332 "raid_level": "concat", 00:09:41.332 "superblock": true, 00:09:41.332 "num_base_bdevs": 4, 00:09:41.332 "num_base_bdevs_discovered": 2, 00:09:41.332 "num_base_bdevs_operational": 4, 00:09:41.332 "base_bdevs_list": [ 00:09:41.332 { 00:09:41.332 "name": "BaseBdev1", 00:09:41.332 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:41.332 "is_configured": true, 00:09:41.332 "data_offset": 2048, 00:09:41.332 "data_size": 63488 00:09:41.332 }, 00:09:41.332 { 00:09:41.332 "name": null, 00:09:41.332 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:41.332 "is_configured": false, 00:09:41.332 "data_offset": 0, 00:09:41.332 "data_size": 63488 00:09:41.332 }, 00:09:41.332 { 00:09:41.332 "name": null, 00:09:41.332 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:41.332 "is_configured": false, 00:09:41.332 "data_offset": 0, 00:09:41.332 "data_size": 63488 00:09:41.332 }, 00:09:41.332 { 00:09:41.332 "name": "BaseBdev4", 00:09:41.332 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:41.332 "is_configured": true, 00:09:41.332 "data_offset": 2048, 00:09:41.332 "data_size": 63488 00:09:41.332 } 00:09:41.332 ] 00:09:41.332 }' 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.332 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.593 18:40:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.593 [2024-12-15 18:40:41.977964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.593 18:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.593 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.854 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.854 "name": "Existed_Raid", 00:09:41.854 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:41.854 "strip_size_kb": 64, 00:09:41.855 "state": "configuring", 00:09:41.855 "raid_level": "concat", 00:09:41.855 "superblock": true, 00:09:41.855 "num_base_bdevs": 4, 00:09:41.855 "num_base_bdevs_discovered": 3, 00:09:41.855 "num_base_bdevs_operational": 4, 00:09:41.855 "base_bdevs_list": [ 00:09:41.855 { 00:09:41.855 "name": "BaseBdev1", 00:09:41.855 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:41.855 "is_configured": true, 00:09:41.855 "data_offset": 2048, 00:09:41.855 "data_size": 63488 00:09:41.855 }, 00:09:41.855 { 00:09:41.855 "name": null, 00:09:41.855 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:41.855 "is_configured": false, 00:09:41.855 "data_offset": 0, 00:09:41.855 "data_size": 63488 00:09:41.855 }, 00:09:41.855 { 00:09:41.855 "name": "BaseBdev3", 00:09:41.855 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:41.855 "is_configured": true, 00:09:41.855 "data_offset": 2048, 00:09:41.855 "data_size": 63488 00:09:41.855 }, 00:09:41.855 { 00:09:41.855 "name": "BaseBdev4", 00:09:41.855 "uuid": 
"d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:41.855 "is_configured": true, 00:09:41.855 "data_offset": 2048, 00:09:41.855 "data_size": 63488 00:09:41.855 } 00:09:41.855 ] 00:09:41.855 }' 00:09:41.855 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.855 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.115 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.115 [2024-12-15 18:40:42.473065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.116 "name": "Existed_Raid", 00:09:42.116 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:42.116 "strip_size_kb": 64, 00:09:42.116 "state": "configuring", 00:09:42.116 "raid_level": "concat", 00:09:42.116 "superblock": true, 00:09:42.116 "num_base_bdevs": 4, 00:09:42.116 "num_base_bdevs_discovered": 2, 00:09:42.116 "num_base_bdevs_operational": 4, 00:09:42.116 "base_bdevs_list": [ 00:09:42.116 { 00:09:42.116 "name": null, 00:09:42.116 
"uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:42.116 "is_configured": false, 00:09:42.116 "data_offset": 0, 00:09:42.116 "data_size": 63488 00:09:42.116 }, 00:09:42.116 { 00:09:42.116 "name": null, 00:09:42.116 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:42.116 "is_configured": false, 00:09:42.116 "data_offset": 0, 00:09:42.116 "data_size": 63488 00:09:42.116 }, 00:09:42.116 { 00:09:42.116 "name": "BaseBdev3", 00:09:42.116 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:42.116 "is_configured": true, 00:09:42.116 "data_offset": 2048, 00:09:42.116 "data_size": 63488 00:09:42.116 }, 00:09:42.116 { 00:09:42.116 "name": "BaseBdev4", 00:09:42.116 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:42.116 "is_configured": true, 00:09:42.116 "data_offset": 2048, 00:09:42.116 "data_size": 63488 00:09:42.116 } 00:09:42.116 ] 00:09:42.116 }' 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.116 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.687 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.687 [2024-12-15 18:40:42.986830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.688 18:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.688 18:40:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.688 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.688 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.688 "name": "Existed_Raid", 00:09:42.688 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:42.688 "strip_size_kb": 64, 00:09:42.688 "state": "configuring", 00:09:42.688 "raid_level": "concat", 00:09:42.688 "superblock": true, 00:09:42.688 "num_base_bdevs": 4, 00:09:42.688 "num_base_bdevs_discovered": 3, 00:09:42.688 "num_base_bdevs_operational": 4, 00:09:42.688 "base_bdevs_list": [ 00:09:42.688 { 00:09:42.688 "name": null, 00:09:42.688 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:42.688 "is_configured": false, 00:09:42.688 "data_offset": 0, 00:09:42.688 "data_size": 63488 00:09:42.688 }, 00:09:42.688 { 00:09:42.688 "name": "BaseBdev2", 00:09:42.688 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:42.688 "is_configured": true, 00:09:42.688 "data_offset": 2048, 00:09:42.688 "data_size": 63488 00:09:42.688 }, 00:09:42.688 { 00:09:42.688 "name": "BaseBdev3", 00:09:42.688 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:42.688 "is_configured": true, 00:09:42.688 "data_offset": 2048, 00:09:42.688 "data_size": 63488 00:09:42.688 }, 00:09:42.688 { 00:09:42.688 "name": "BaseBdev4", 00:09:42.688 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:42.688 "is_configured": true, 00:09:42.688 "data_offset": 2048, 00:09:42.688 "data_size": 63488 00:09:42.688 } 00:09:42.688 ] 00:09:42.688 }' 00:09:42.688 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.688 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.258 18:40:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.258 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 01a8027f-769e-4a3b-ba04-a25f13edf598 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.259 [2024-12-15 18:40:43.528970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:43.259 [2024-12-15 18:40:43.529244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:43.259 [2024-12-15 18:40:43.529281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:43.259 [2024-12-15 18:40:43.529589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:09:43.259 NewBaseBdev 00:09:43.259 [2024-12-15 18:40:43.529740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:43.259 [2024-12-15 18:40:43.529783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:43.259 [2024-12-15 18:40:43.529967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.259 18:40:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.259 [ 00:09:43.259 { 00:09:43.259 "name": "NewBaseBdev", 00:09:43.259 "aliases": [ 00:09:43.259 "01a8027f-769e-4a3b-ba04-a25f13edf598" 00:09:43.259 ], 00:09:43.259 "product_name": "Malloc disk", 00:09:43.259 "block_size": 512, 00:09:43.259 "num_blocks": 65536, 00:09:43.259 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:43.259 "assigned_rate_limits": { 00:09:43.259 "rw_ios_per_sec": 0, 00:09:43.259 "rw_mbytes_per_sec": 0, 00:09:43.259 "r_mbytes_per_sec": 0, 00:09:43.259 "w_mbytes_per_sec": 0 00:09:43.259 }, 00:09:43.259 "claimed": true, 00:09:43.259 "claim_type": "exclusive_write", 00:09:43.259 "zoned": false, 00:09:43.259 "supported_io_types": { 00:09:43.259 "read": true, 00:09:43.259 "write": true, 00:09:43.259 "unmap": true, 00:09:43.259 "flush": true, 00:09:43.259 "reset": true, 00:09:43.259 "nvme_admin": false, 00:09:43.259 "nvme_io": false, 00:09:43.259 "nvme_io_md": false, 00:09:43.259 "write_zeroes": true, 00:09:43.259 "zcopy": true, 00:09:43.259 "get_zone_info": false, 00:09:43.259 "zone_management": false, 00:09:43.259 "zone_append": false, 00:09:43.259 "compare": false, 00:09:43.259 "compare_and_write": false, 00:09:43.259 "abort": true, 00:09:43.259 "seek_hole": false, 00:09:43.259 "seek_data": false, 00:09:43.259 "copy": true, 00:09:43.259 "nvme_iov_md": false 00:09:43.259 }, 00:09:43.259 "memory_domains": [ 00:09:43.259 { 00:09:43.259 "dma_device_id": "system", 00:09:43.259 "dma_device_type": 1 00:09:43.259 }, 00:09:43.259 { 00:09:43.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.259 "dma_device_type": 2 00:09:43.259 } 00:09:43.259 ], 00:09:43.259 "driver_specific": {} 00:09:43.259 } 00:09:43.259 ] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:43.259 18:40:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.259 "name": "Existed_Raid", 00:09:43.259 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:43.259 "strip_size_kb": 64, 00:09:43.259 
"state": "online", 00:09:43.259 "raid_level": "concat", 00:09:43.259 "superblock": true, 00:09:43.259 "num_base_bdevs": 4, 00:09:43.259 "num_base_bdevs_discovered": 4, 00:09:43.259 "num_base_bdevs_operational": 4, 00:09:43.259 "base_bdevs_list": [ 00:09:43.259 { 00:09:43.259 "name": "NewBaseBdev", 00:09:43.259 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:43.259 "is_configured": true, 00:09:43.259 "data_offset": 2048, 00:09:43.259 "data_size": 63488 00:09:43.259 }, 00:09:43.259 { 00:09:43.259 "name": "BaseBdev2", 00:09:43.259 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:43.259 "is_configured": true, 00:09:43.259 "data_offset": 2048, 00:09:43.259 "data_size": 63488 00:09:43.259 }, 00:09:43.259 { 00:09:43.259 "name": "BaseBdev3", 00:09:43.259 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:43.259 "is_configured": true, 00:09:43.259 "data_offset": 2048, 00:09:43.259 "data_size": 63488 00:09:43.259 }, 00:09:43.259 { 00:09:43.259 "name": "BaseBdev4", 00:09:43.259 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:43.259 "is_configured": true, 00:09:43.259 "data_offset": 2048, 00:09:43.259 "data_size": 63488 00:09:43.259 } 00:09:43.259 ] 00:09:43.259 }' 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.259 18:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.830 
18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.830 [2024-12-15 18:40:44.048599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.830 "name": "Existed_Raid", 00:09:43.830 "aliases": [ 00:09:43.830 "b4f97cc2-8c72-4625-a471-99350d9c6a7b" 00:09:43.830 ], 00:09:43.830 "product_name": "Raid Volume", 00:09:43.830 "block_size": 512, 00:09:43.830 "num_blocks": 253952, 00:09:43.830 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:43.830 "assigned_rate_limits": { 00:09:43.830 "rw_ios_per_sec": 0, 00:09:43.830 "rw_mbytes_per_sec": 0, 00:09:43.830 "r_mbytes_per_sec": 0, 00:09:43.830 "w_mbytes_per_sec": 0 00:09:43.830 }, 00:09:43.830 "claimed": false, 00:09:43.830 "zoned": false, 00:09:43.830 "supported_io_types": { 00:09:43.830 "read": true, 00:09:43.830 "write": true, 00:09:43.830 "unmap": true, 00:09:43.830 "flush": true, 00:09:43.830 "reset": true, 00:09:43.830 "nvme_admin": false, 00:09:43.830 "nvme_io": false, 00:09:43.830 "nvme_io_md": false, 00:09:43.830 "write_zeroes": true, 00:09:43.830 "zcopy": false, 00:09:43.830 "get_zone_info": false, 00:09:43.830 "zone_management": false, 00:09:43.830 "zone_append": false, 00:09:43.830 "compare": false, 00:09:43.830 "compare_and_write": false, 00:09:43.830 "abort": 
false, 00:09:43.830 "seek_hole": false, 00:09:43.830 "seek_data": false, 00:09:43.830 "copy": false, 00:09:43.830 "nvme_iov_md": false 00:09:43.830 }, 00:09:43.830 "memory_domains": [ 00:09:43.830 { 00:09:43.830 "dma_device_id": "system", 00:09:43.830 "dma_device_type": 1 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.830 "dma_device_type": 2 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "system", 00:09:43.830 "dma_device_type": 1 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.830 "dma_device_type": 2 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "system", 00:09:43.830 "dma_device_type": 1 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.830 "dma_device_type": 2 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "system", 00:09:43.830 "dma_device_type": 1 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.830 "dma_device_type": 2 00:09:43.830 } 00:09:43.830 ], 00:09:43.830 "driver_specific": { 00:09:43.830 "raid": { 00:09:43.830 "uuid": "b4f97cc2-8c72-4625-a471-99350d9c6a7b", 00:09:43.830 "strip_size_kb": 64, 00:09:43.830 "state": "online", 00:09:43.830 "raid_level": "concat", 00:09:43.830 "superblock": true, 00:09:43.830 "num_base_bdevs": 4, 00:09:43.830 "num_base_bdevs_discovered": 4, 00:09:43.830 "num_base_bdevs_operational": 4, 00:09:43.830 "base_bdevs_list": [ 00:09:43.830 { 00:09:43.830 "name": "NewBaseBdev", 00:09:43.830 "uuid": "01a8027f-769e-4a3b-ba04-a25f13edf598", 00:09:43.830 "is_configured": true, 00:09:43.830 "data_offset": 2048, 00:09:43.830 "data_size": 63488 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "name": "BaseBdev2", 00:09:43.830 "uuid": "2e03625f-a05c-488a-bd34-fd85c10f3f33", 00:09:43.830 "is_configured": true, 00:09:43.830 "data_offset": 2048, 00:09:43.830 "data_size": 63488 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 
"name": "BaseBdev3", 00:09:43.830 "uuid": "583dee2f-d6b3-4f43-b2e3-ff0befb5440f", 00:09:43.830 "is_configured": true, 00:09:43.830 "data_offset": 2048, 00:09:43.830 "data_size": 63488 00:09:43.830 }, 00:09:43.830 { 00:09:43.830 "name": "BaseBdev4", 00:09:43.830 "uuid": "d58d6c17-afa1-42c9-9d90-8f04c707ca2e", 00:09:43.830 "is_configured": true, 00:09:43.830 "data_offset": 2048, 00:09:43.830 "data_size": 63488 00:09:43.830 } 00:09:43.830 ] 00:09:43.830 } 00:09:43.830 } 00:09:43.830 }' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:43.830 BaseBdev2 00:09:43.830 BaseBdev3 00:09:43.830 BaseBdev4' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.830 18:40:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.830 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.091 [2024-12-15 18:40:44.375610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.091 [2024-12-15 18:40:44.375684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.091 [2024-12-15 18:40:44.375781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.091 [2024-12-15 18:40:44.375882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.091 [2024-12-15 18:40:44.375926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84738 00:09:44.091 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84738 ']' 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84738 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84738 00:09:44.092 killing process with pid 84738 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84738' 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84738 00:09:44.092 [2024-12-15 18:40:44.413365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.092 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84738 00:09:44.092 [2024-12-15 18:40:44.454322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.352 18:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.352 00:09:44.352 real 0m9.690s 00:09:44.352 user 0m16.516s 00:09:44.352 sys 0m2.132s 00:09:44.352 ************************************ 00:09:44.352 END TEST raid_state_function_test_sb 00:09:44.352 
************************************ 00:09:44.352 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.352 18:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 18:40:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:44.352 18:40:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.352 18:40:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.352 18:40:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 ************************************ 00:09:44.352 START TEST raid_superblock_test 00:09:44.352 ************************************ 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:44.352 18:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85391 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85391 00:09:44.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85391 ']' 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.352 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.353 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.353 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.353 18:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.614 [2024-12-15 18:40:44.845754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:44.614 [2024-12-15 18:40:44.845958] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85391 ] 00:09:44.614 [2024-12-15 18:40:45.033704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.876 [2024-12-15 18:40:45.062455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.876 [2024-12-15 18:40:45.105548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.876 [2024-12-15 18:40:45.105614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.444 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.444 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.444 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:45.444 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.444 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:45.445 
18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 malloc1 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 [2024-12-15 18:40:45.689982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.445 [2024-12-15 18:40:45.690089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.445 [2024-12-15 18:40:45.690126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:45.445 [2024-12-15 18:40:45.690160] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.445 [2024-12-15 18:40:45.692276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.445 [2024-12-15 18:40:45.692364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.445 pt1 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 malloc2 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 [2024-12-15 18:40:45.722702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.445 [2024-12-15 18:40:45.722797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.445 [2024-12-15 18:40:45.722839] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:45.445 [2024-12-15 18:40:45.722869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.445 [2024-12-15 18:40:45.724935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.445 [2024-12-15 18:40:45.725006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.445 
pt2 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 malloc3 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 [2024-12-15 18:40:45.751368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:45.445 [2024-12-15 18:40:45.751459] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.445 [2024-12-15 18:40:45.751496] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:45.445 [2024-12-15 18:40:45.751525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.445 [2024-12-15 18:40:45.753580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.445 [2024-12-15 18:40:45.753655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:45.445 pt3 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 malloc4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 [2024-12-15 18:40:45.790216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:45.445 [2024-12-15 18:40:45.790322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.445 [2024-12-15 18:40:45.790361] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.445 [2024-12-15 18:40:45.790397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.445 [2024-12-15 18:40:45.792626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.445 [2024-12-15 18:40:45.792703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:45.445 pt4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.445 [2024-12-15 18:40:45.802281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.445 [2024-12-15 
18:40:45.804273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.445 [2024-12-15 18:40:45.804401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:45.445 [2024-12-15 18:40:45.804480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:45.445 [2024-12-15 18:40:45.804668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:45.445 [2024-12-15 18:40:45.804719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:45.445 [2024-12-15 18:40:45.805003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:45.445 [2024-12-15 18:40:45.805191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:45.445 [2024-12-15 18:40:45.805240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:45.445 [2024-12-15 18:40:45.805405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.445 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.446 "name": "raid_bdev1", 00:09:45.446 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:45.446 "strip_size_kb": 64, 00:09:45.446 "state": "online", 00:09:45.446 "raid_level": "concat", 00:09:45.446 "superblock": true, 00:09:45.446 "num_base_bdevs": 4, 00:09:45.446 "num_base_bdevs_discovered": 4, 00:09:45.446 "num_base_bdevs_operational": 4, 00:09:45.446 "base_bdevs_list": [ 00:09:45.446 { 00:09:45.446 "name": "pt1", 00:09:45.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.446 "is_configured": true, 00:09:45.446 "data_offset": 2048, 00:09:45.446 "data_size": 63488 00:09:45.446 }, 00:09:45.446 { 00:09:45.446 "name": "pt2", 00:09:45.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.446 "is_configured": true, 00:09:45.446 "data_offset": 2048, 00:09:45.446 "data_size": 63488 00:09:45.446 }, 00:09:45.446 { 00:09:45.446 "name": "pt3", 00:09:45.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.446 "is_configured": true, 00:09:45.446 "data_offset": 2048, 00:09:45.446 
"data_size": 63488 00:09:45.446 }, 00:09:45.446 { 00:09:45.446 "name": "pt4", 00:09:45.446 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.446 "is_configured": true, 00:09:45.446 "data_offset": 2048, 00:09:45.446 "data_size": 63488 00:09:45.446 } 00:09:45.446 ] 00:09:45.446 }' 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.446 18:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.015 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.016 [2024-12-15 18:40:46.181982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.016 "name": "raid_bdev1", 00:09:46.016 "aliases": [ 00:09:46.016 "bc35ac11-97ba-4c2a-97c5-c3417666e196" 
00:09:46.016 ], 00:09:46.016 "product_name": "Raid Volume", 00:09:46.016 "block_size": 512, 00:09:46.016 "num_blocks": 253952, 00:09:46.016 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:46.016 "assigned_rate_limits": { 00:09:46.016 "rw_ios_per_sec": 0, 00:09:46.016 "rw_mbytes_per_sec": 0, 00:09:46.016 "r_mbytes_per_sec": 0, 00:09:46.016 "w_mbytes_per_sec": 0 00:09:46.016 }, 00:09:46.016 "claimed": false, 00:09:46.016 "zoned": false, 00:09:46.016 "supported_io_types": { 00:09:46.016 "read": true, 00:09:46.016 "write": true, 00:09:46.016 "unmap": true, 00:09:46.016 "flush": true, 00:09:46.016 "reset": true, 00:09:46.016 "nvme_admin": false, 00:09:46.016 "nvme_io": false, 00:09:46.016 "nvme_io_md": false, 00:09:46.016 "write_zeroes": true, 00:09:46.016 "zcopy": false, 00:09:46.016 "get_zone_info": false, 00:09:46.016 "zone_management": false, 00:09:46.016 "zone_append": false, 00:09:46.016 "compare": false, 00:09:46.016 "compare_and_write": false, 00:09:46.016 "abort": false, 00:09:46.016 "seek_hole": false, 00:09:46.016 "seek_data": false, 00:09:46.016 "copy": false, 00:09:46.016 "nvme_iov_md": false 00:09:46.016 }, 00:09:46.016 "memory_domains": [ 00:09:46.016 { 00:09:46.016 "dma_device_id": "system", 00:09:46.016 "dma_device_type": 1 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.016 "dma_device_type": 2 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "system", 00:09:46.016 "dma_device_type": 1 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.016 "dma_device_type": 2 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "system", 00:09:46.016 "dma_device_type": 1 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.016 "dma_device_type": 2 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": "system", 00:09:46.016 "dma_device_type": 1 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:46.016 "dma_device_type": 2 00:09:46.016 } 00:09:46.016 ], 00:09:46.016 "driver_specific": { 00:09:46.016 "raid": { 00:09:46.016 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:46.016 "strip_size_kb": 64, 00:09:46.016 "state": "online", 00:09:46.016 "raid_level": "concat", 00:09:46.016 "superblock": true, 00:09:46.016 "num_base_bdevs": 4, 00:09:46.016 "num_base_bdevs_discovered": 4, 00:09:46.016 "num_base_bdevs_operational": 4, 00:09:46.016 "base_bdevs_list": [ 00:09:46.016 { 00:09:46.016 "name": "pt1", 00:09:46.016 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.016 "is_configured": true, 00:09:46.016 "data_offset": 2048, 00:09:46.016 "data_size": 63488 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "name": "pt2", 00:09:46.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.016 "is_configured": true, 00:09:46.016 "data_offset": 2048, 00:09:46.016 "data_size": 63488 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "name": "pt3", 00:09:46.016 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.016 "is_configured": true, 00:09:46.016 "data_offset": 2048, 00:09:46.016 "data_size": 63488 00:09:46.016 }, 00:09:46.016 { 00:09:46.016 "name": "pt4", 00:09:46.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.016 "is_configured": true, 00:09:46.016 "data_offset": 2048, 00:09:46.016 "data_size": 63488 00:09:46.016 } 00:09:46.016 ] 00:09:46.016 } 00:09:46.016 } 00:09:46.016 }' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.016 pt2 00:09:46.016 pt3 00:09:46.016 pt4' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.016 18:40:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.016 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.017 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 [2024-12-15 18:40:46.485356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc35ac11-97ba-4c2a-97c5-c3417666e196 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc35ac11-97ba-4c2a-97c5-c3417666e196 ']' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 [2024-12-15 18:40:46.520993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.282 [2024-12-15 18:40:46.521040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.282 [2024-12-15 18:40:46.521126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.282 [2024-12-15 18:40:46.521199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.282 [2024-12-15 18:40:46.521225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.282 18:40:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.282 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.282 [2024-12-15 18:40:46.680778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:46.282 [2024-12-15 18:40:46.682625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:46.282 [2024-12-15 18:40:46.682682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:46.282 [2024-12-15 18:40:46.682711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:46.282 [2024-12-15 18:40:46.682756] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:46.282 [2024-12-15 18:40:46.682813] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:46.282 [2024-12-15 18:40:46.682835] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:46.282 [2024-12-15 18:40:46.682862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:46.282 [2024-12-15 18:40:46.682878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.282 [2024-12-15 18:40:46.682888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:09:46.282 request: 00:09:46.283 { 00:09:46.283 "name": "raid_bdev1", 00:09:46.283 "raid_level": "concat", 00:09:46.283 "base_bdevs": [ 00:09:46.283 "malloc1", 00:09:46.283 "malloc2", 00:09:46.283 "malloc3", 00:09:46.283 "malloc4" 00:09:46.283 ], 00:09:46.283 "strip_size_kb": 64, 00:09:46.283 "superblock": false, 00:09:46.283 "method": "bdev_raid_create", 00:09:46.283 "req_id": 1 00:09:46.283 } 00:09:46.283 Got JSON-RPC error response 00:09:46.283 response: 00:09:46.283 { 00:09:46.283 "code": -17, 00:09:46.283 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:46.283 } 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:46.283 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.554 [2024-12-15 18:40:46.744614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:46.554 [2024-12-15 18:40:46.744669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.554 [2024-12-15 18:40:46.744690] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:46.554 [2024-12-15 18:40:46.744699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.554 [2024-12-15 18:40:46.746876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.554 [2024-12-15 18:40:46.746911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:46.554 [2024-12-15 18:40:46.746985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:46.554 [2024-12-15 18:40:46.747020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:46.554 pt1 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.554 "name": "raid_bdev1", 00:09:46.554 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:46.554 "strip_size_kb": 64, 00:09:46.554 "state": "configuring", 00:09:46.554 "raid_level": "concat", 00:09:46.554 "superblock": true, 00:09:46.554 "num_base_bdevs": 4, 00:09:46.554 "num_base_bdevs_discovered": 1, 00:09:46.554 "num_base_bdevs_operational": 4, 00:09:46.554 "base_bdevs_list": [ 00:09:46.554 { 00:09:46.554 "name": "pt1", 00:09:46.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.554 "is_configured": true, 00:09:46.554 "data_offset": 2048, 00:09:46.554 "data_size": 63488 00:09:46.554 }, 00:09:46.554 { 00:09:46.554 "name": null, 00:09:46.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.554 "is_configured": false, 00:09:46.554 "data_offset": 2048, 00:09:46.554 "data_size": 63488 00:09:46.554 }, 00:09:46.554 { 00:09:46.554 "name": null, 00:09:46.554 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.554 "is_configured": false, 00:09:46.554 "data_offset": 2048, 00:09:46.554 "data_size": 63488 00:09:46.554 }, 00:09:46.554 { 00:09:46.554 "name": null, 00:09:46.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.554 "is_configured": false, 00:09:46.554 "data_offset": 2048, 00:09:46.554 "data_size": 63488 00:09:46.554 } 00:09:46.554 ] 00:09:46.554 }' 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.554 18:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.815 [2024-12-15 18:40:47.171963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.815 [2024-12-15 18:40:47.172031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.815 [2024-12-15 18:40:47.172053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:46.815 [2024-12-15 18:40:47.172062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.815 [2024-12-15 18:40:47.172477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.815 [2024-12-15 18:40:47.172504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.815 [2024-12-15 18:40:47.172585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.815 [2024-12-15 18:40:47.172612] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.815 pt2 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.815 [2024-12-15 18:40:47.183939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.815 18:40:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.815 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.815 "name": "raid_bdev1", 00:09:46.815 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:46.815 "strip_size_kb": 64, 00:09:46.815 "state": "configuring", 00:09:46.815 "raid_level": "concat", 00:09:46.815 "superblock": true, 00:09:46.815 "num_base_bdevs": 4, 00:09:46.815 "num_base_bdevs_discovered": 1, 00:09:46.815 "num_base_bdevs_operational": 4, 00:09:46.815 "base_bdevs_list": [ 00:09:46.815 { 00:09:46.815 "name": "pt1", 00:09:46.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.815 "is_configured": true, 00:09:46.815 "data_offset": 2048, 00:09:46.815 "data_size": 63488 00:09:46.815 }, 00:09:46.815 { 00:09:46.815 "name": null, 00:09:46.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.815 "is_configured": false, 00:09:46.816 "data_offset": 0, 00:09:46.816 "data_size": 63488 00:09:46.816 }, 00:09:46.816 { 00:09:46.816 "name": null, 00:09:46.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.816 "is_configured": false, 00:09:46.816 "data_offset": 2048, 00:09:46.816 "data_size": 63488 00:09:46.816 }, 00:09:46.816 { 00:09:46.816 "name": null, 00:09:46.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:46.816 "is_configured": false, 00:09:46.816 "data_offset": 2048, 00:09:46.816 "data_size": 63488 00:09:46.816 } 00:09:46.816 ] 00:09:46.816 }' 00:09:46.816 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.816 18:40:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.385 [2024-12-15 18:40:47.643135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.385 [2024-12-15 18:40:47.643205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.385 [2024-12-15 18:40:47.643224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:47.385 [2024-12-15 18:40:47.643234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.385 [2024-12-15 18:40:47.643634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.385 [2024-12-15 18:40:47.643662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.385 [2024-12-15 18:40:47.643735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.385 [2024-12-15 18:40:47.643762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.385 pt2 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.385 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 [2024-12-15 18:40:47.655078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.386 [2024-12-15 18:40:47.655126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.386 [2024-12-15 18:40:47.655142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:47.386 [2024-12-15 18:40:47.655152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.386 [2024-12-15 18:40:47.655451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.386 [2024-12-15 18:40:47.655478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.386 [2024-12-15 18:40:47.655529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:47.386 [2024-12-15 18:40:47.655554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.386 pt3 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 [2024-12-15 18:40:47.667040] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:47.386 [2024-12-15 18:40:47.667086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.386 [2024-12-15 18:40:47.667099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:47.386 [2024-12-15 18:40:47.667108] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.386 [2024-12-15 18:40:47.667392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.386 [2024-12-15 18:40:47.667418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:47.386 [2024-12-15 18:40:47.667469] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:47.386 [2024-12-15 18:40:47.667487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:47.386 [2024-12-15 18:40:47.667578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:47.386 [2024-12-15 18:40:47.667598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:47.386 [2024-12-15 18:40:47.667827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.386 [2024-12-15 18:40:47.667946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:47.386 [2024-12-15 18:40:47.667962] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:47.386 [2024-12-15 18:40:47.668056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.386 pt4 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.386 "name": "raid_bdev1", 00:09:47.386 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:47.386 "strip_size_kb": 64, 00:09:47.386 "state": "online", 00:09:47.386 "raid_level": "concat", 00:09:47.386 
"superblock": true, 00:09:47.386 "num_base_bdevs": 4, 00:09:47.386 "num_base_bdevs_discovered": 4, 00:09:47.386 "num_base_bdevs_operational": 4, 00:09:47.386 "base_bdevs_list": [ 00:09:47.386 { 00:09:47.386 "name": "pt1", 00:09:47.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.386 "is_configured": true, 00:09:47.386 "data_offset": 2048, 00:09:47.386 "data_size": 63488 00:09:47.386 }, 00:09:47.386 { 00:09:47.386 "name": "pt2", 00:09:47.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.386 "is_configured": true, 00:09:47.386 "data_offset": 2048, 00:09:47.386 "data_size": 63488 00:09:47.386 }, 00:09:47.386 { 00:09:47.386 "name": "pt3", 00:09:47.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.386 "is_configured": true, 00:09:47.386 "data_offset": 2048, 00:09:47.386 "data_size": 63488 00:09:47.386 }, 00:09:47.386 { 00:09:47.386 "name": "pt4", 00:09:47.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.386 "is_configured": true, 00:09:47.386 "data_offset": 2048, 00:09:47.386 "data_size": 63488 00:09:47.386 } 00:09:47.386 ] 00:09:47.386 }' 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.386 18:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.957 18:40:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 [2024-12-15 18:40:48.134565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.957 "name": "raid_bdev1", 00:09:47.957 "aliases": [ 00:09:47.957 "bc35ac11-97ba-4c2a-97c5-c3417666e196" 00:09:47.957 ], 00:09:47.957 "product_name": "Raid Volume", 00:09:47.957 "block_size": 512, 00:09:47.957 "num_blocks": 253952, 00:09:47.957 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:47.957 "assigned_rate_limits": { 00:09:47.957 "rw_ios_per_sec": 0, 00:09:47.957 "rw_mbytes_per_sec": 0, 00:09:47.957 "r_mbytes_per_sec": 0, 00:09:47.957 "w_mbytes_per_sec": 0 00:09:47.957 }, 00:09:47.957 "claimed": false, 00:09:47.957 "zoned": false, 00:09:47.957 "supported_io_types": { 00:09:47.957 "read": true, 00:09:47.957 "write": true, 00:09:47.957 "unmap": true, 00:09:47.957 "flush": true, 00:09:47.957 "reset": true, 00:09:47.957 "nvme_admin": false, 00:09:47.957 "nvme_io": false, 00:09:47.957 "nvme_io_md": false, 00:09:47.957 "write_zeroes": true, 00:09:47.957 "zcopy": false, 00:09:47.957 "get_zone_info": false, 00:09:47.957 "zone_management": false, 00:09:47.957 "zone_append": false, 00:09:47.957 "compare": false, 00:09:47.957 "compare_and_write": false, 00:09:47.957 "abort": false, 00:09:47.957 "seek_hole": false, 00:09:47.957 "seek_data": false, 00:09:47.957 "copy": false, 00:09:47.957 "nvme_iov_md": false 00:09:47.957 }, 00:09:47.957 
"memory_domains": [ 00:09:47.957 { 00:09:47.957 "dma_device_id": "system", 00:09:47.957 "dma_device_type": 1 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.957 "dma_device_type": 2 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "system", 00:09:47.957 "dma_device_type": 1 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.957 "dma_device_type": 2 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "system", 00:09:47.957 "dma_device_type": 1 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.957 "dma_device_type": 2 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "system", 00:09:47.957 "dma_device_type": 1 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.957 "dma_device_type": 2 00:09:47.957 } 00:09:47.957 ], 00:09:47.957 "driver_specific": { 00:09:47.957 "raid": { 00:09:47.957 "uuid": "bc35ac11-97ba-4c2a-97c5-c3417666e196", 00:09:47.957 "strip_size_kb": 64, 00:09:47.957 "state": "online", 00:09:47.957 "raid_level": "concat", 00:09:47.957 "superblock": true, 00:09:47.957 "num_base_bdevs": 4, 00:09:47.957 "num_base_bdevs_discovered": 4, 00:09:47.957 "num_base_bdevs_operational": 4, 00:09:47.957 "base_bdevs_list": [ 00:09:47.957 { 00:09:47.957 "name": "pt1", 00:09:47.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.957 "is_configured": true, 00:09:47.957 "data_offset": 2048, 00:09:47.957 "data_size": 63488 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "name": "pt2", 00:09:47.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.957 "is_configured": true, 00:09:47.957 "data_offset": 2048, 00:09:47.957 "data_size": 63488 00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "name": "pt3", 00:09:47.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.957 "is_configured": true, 00:09:47.957 "data_offset": 2048, 00:09:47.957 "data_size": 63488 
00:09:47.957 }, 00:09:47.957 { 00:09:47.957 "name": "pt4", 00:09:47.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:47.957 "is_configured": true, 00:09:47.957 "data_offset": 2048, 00:09:47.957 "data_size": 63488 00:09:47.957 } 00:09:47.957 ] 00:09:47.957 } 00:09:47.957 } 00:09:47.957 }' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:47.957 pt2 00:09:47.957 pt3 00:09:47.957 pt4' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:47.957 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.958 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:48.218 [2024-12-15 18:40:48.446058] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc35ac11-97ba-4c2a-97c5-c3417666e196 '!=' bc35ac11-97ba-4c2a-97c5-c3417666e196 ']' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85391 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85391 ']' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85391 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85391 00:09:48.218 killing process with pid 85391 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85391' 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 85391 00:09:48.218 [2024-12-15 18:40:48.522834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.218 [2024-12-15 18:40:48.522950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.218 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 85391 00:09:48.218 [2024-12-15 18:40:48.523025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.218 [2024-12-15 18:40:48.523037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:48.218 [2024-12-15 18:40:48.568224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.478 18:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:48.478 00:09:48.478 real 0m4.044s 00:09:48.478 user 0m6.277s 00:09:48.478 sys 0m0.888s 00:09:48.478 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.478 18:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.478 ************************************ 00:09:48.478 END TEST raid_superblock_test 
00:09:48.478 ************************************ 00:09:48.478 18:40:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:48.478 18:40:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.478 18:40:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.478 18:40:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.478 ************************************ 00:09:48.478 START TEST raid_read_error_test 00:09:48.478 ************************************ 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wmkiwRWPyE 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85640 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85640 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85640 ']' 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.478 18:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.739 [2024-12-15 18:40:48.977836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:48.739 [2024-12-15 18:40:48.978001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85640 ] 00:09:48.739 [2024-12-15 18:40:49.154530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.999 [2024-12-15 18:40:49.180616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.999 [2024-12-15 18:40:49.223587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.999 [2024-12-15 18:40:49.223626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.569 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.569 BaseBdev1_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 true 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
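Each entry of the `base_bdevs` array declared above is then built as a three-layer chain: a malloc bdev, an error-injection bdev wrapped around it (which takes an `EE_` prefix), and a passthru bdev carrying the final `BaseBdevN` name. A stubbed sketch of that loop — `rpc_cmd` is replaced by `echo` here since no SPDK target is running, so only the naming convention is exercised:

```shell
# Illustrative sketch of the per-bdev chain from bdev_raid.sh@814-817.
# rpc_cmd is stubbed; against a live target it would wrap spdk's rpc.py.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
i=1
chain=""
while [ "$i" -le "$num_base_bdevs" ]; do
    bdev="BaseBdev$i"
    rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"       # backing malloc disk
    rpc_cmd bdev_error_create "${bdev}_malloc"                  # yields EE_${bdev}_malloc
    rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    chain="$chain $bdev"
    i=$((i + 1))
done
echo "chain:$chain"
```

The error layer is what later lets `bdev_error_inject_error EE_BaseBdev1_malloc read failure` fail I/O on a single member without touching the others.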
00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 [2024-12-15 18:40:49.851214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.570 [2024-12-15 18:40:49.851282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.570 [2024-12-15 18:40:49.851313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.570 [2024-12-15 18:40:49.851322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.570 [2024-12-15 18:40:49.853543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.570 [2024-12-15 18:40:49.853583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.570 BaseBdev1 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 BaseBdev2_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 true 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 [2024-12-15 18:40:49.891967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.570 [2024-12-15 18:40:49.892022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.570 [2024-12-15 18:40:49.892042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.570 [2024-12-15 18:40:49.892050] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.570 [2024-12-15 18:40:49.894106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.570 [2024-12-15 18:40:49.894143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.570 BaseBdev2 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 BaseBdev3_malloc 00:09:49.570 18:40:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 true 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 [2024-12-15 18:40:49.932635] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.570 [2024-12-15 18:40:49.932682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.570 [2024-12-15 18:40:49.932704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.570 [2024-12-15 18:40:49.932712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.570 [2024-12-15 18:40:49.934701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.570 [2024-12-15 18:40:49.934736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:49.570 BaseBdev3 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 BaseBdev4_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 true 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 [2024-12-15 18:40:49.983824] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:49.570 [2024-12-15 18:40:49.983876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.570 [2024-12-15 18:40:49.983898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.570 [2024-12-15 18:40:49.983907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.570 [2024-12-15 18:40:49.986024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.570 [2024-12-15 18:40:49.986063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:49.570 BaseBdev4 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.570 [2024-12-15 18:40:49.995863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.570 [2024-12-15 18:40:49.997668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.570 [2024-12-15 18:40:49.997774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.570 [2024-12-15 18:40:49.997856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.570 [2024-12-15 18:40:49.998064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:49.570 [2024-12-15 18:40:49.998083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:49.570 [2024-12-15 18:40:49.998339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:49.570 [2024-12-15 18:40:49.998477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:49.570 [2024-12-15 18:40:49.998497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:49.570 [2024-12-15 18:40:49.998627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.570 18:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:49.570 18:40:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.570 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.571 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.571 18:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.831 18:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.831 18:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.831 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.831 "name": "raid_bdev1", 00:09:49.831 "uuid": "4fdf44a7-abfa-4bc9-b1fc-6db8ce0a308c", 00:09:49.831 "strip_size_kb": 64, 00:09:49.831 "state": "online", 00:09:49.831 "raid_level": "concat", 00:09:49.831 "superblock": true, 00:09:49.831 "num_base_bdevs": 4, 00:09:49.831 "num_base_bdevs_discovered": 4, 00:09:49.831 "num_base_bdevs_operational": 4, 00:09:49.831 "base_bdevs_list": [ 
00:09:49.831 { 00:09:49.831 "name": "BaseBdev1", 00:09:49.831 "uuid": "072197ab-5446-5618-8d69-581fc31fb2de", 00:09:49.831 "is_configured": true, 00:09:49.831 "data_offset": 2048, 00:09:49.831 "data_size": 63488 00:09:49.831 }, 00:09:49.831 { 00:09:49.831 "name": "BaseBdev2", 00:09:49.831 "uuid": "dcdc97db-886d-5223-b750-098584c06c04", 00:09:49.831 "is_configured": true, 00:09:49.831 "data_offset": 2048, 00:09:49.831 "data_size": 63488 00:09:49.831 }, 00:09:49.831 { 00:09:49.831 "name": "BaseBdev3", 00:09:49.831 "uuid": "e45a3faf-799f-5340-92ae-922a63fc1575", 00:09:49.831 "is_configured": true, 00:09:49.831 "data_offset": 2048, 00:09:49.831 "data_size": 63488 00:09:49.831 }, 00:09:49.831 { 00:09:49.831 "name": "BaseBdev4", 00:09:49.831 "uuid": "670fe402-f015-586d-83b4-01b6cf107746", 00:09:49.831 "is_configured": true, 00:09:49.831 "data_offset": 2048, 00:09:49.831 "data_size": 63488 00:09:49.831 } 00:09:49.831 ] 00:09:49.831 }' 00:09:49.831 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.831 18:40:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.092 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.092 18:40:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.092 [2024-12-15 18:40:50.499371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.032 18:40:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.032 18:40:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.032 "name": "raid_bdev1", 00:09:51.033 "uuid": "4fdf44a7-abfa-4bc9-b1fc-6db8ce0a308c", 00:09:51.033 "strip_size_kb": 64, 00:09:51.033 "state": "online", 00:09:51.033 "raid_level": "concat", 00:09:51.033 "superblock": true, 00:09:51.033 "num_base_bdevs": 4, 00:09:51.033 "num_base_bdevs_discovered": 4, 00:09:51.033 "num_base_bdevs_operational": 4, 00:09:51.033 "base_bdevs_list": [ 00:09:51.033 { 00:09:51.033 "name": "BaseBdev1", 00:09:51.033 "uuid": "072197ab-5446-5618-8d69-581fc31fb2de", 00:09:51.033 "is_configured": true, 00:09:51.033 "data_offset": 2048, 00:09:51.033 "data_size": 63488 00:09:51.033 }, 00:09:51.033 { 00:09:51.033 "name": "BaseBdev2", 00:09:51.033 "uuid": "dcdc97db-886d-5223-b750-098584c06c04", 00:09:51.033 "is_configured": true, 00:09:51.033 "data_offset": 2048, 00:09:51.033 "data_size": 63488 00:09:51.033 }, 00:09:51.033 { 00:09:51.033 "name": "BaseBdev3", 00:09:51.033 "uuid": "e45a3faf-799f-5340-92ae-922a63fc1575", 00:09:51.033 "is_configured": true, 00:09:51.033 "data_offset": 2048, 00:09:51.033 "data_size": 63488 00:09:51.033 }, 00:09:51.033 { 00:09:51.033 "name": "BaseBdev4", 00:09:51.033 "uuid": "670fe402-f015-586d-83b4-01b6cf107746", 00:09:51.033 "is_configured": true, 00:09:51.033 "data_offset": 2048, 00:09:51.033 "data_size": 63488 00:09:51.033 } 00:09:51.033 ] 00:09:51.033 }' 00:09:51.033 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.033 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.603 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.603 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 [2024-12-15 18:40:51.822940] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.603 [2024-12-15 18:40:51.822983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.604 [2024-12-15 18:40:51.825503] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.604 [2024-12-15 18:40:51.825571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.604 [2024-12-15 18:40:51.825617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.604 [2024-12-15 18:40:51.825627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:51.604 { 00:09:51.604 "results": [ 00:09:51.604 { 00:09:51.604 "job": "raid_bdev1", 00:09:51.604 "core_mask": "0x1", 00:09:51.604 "workload": "randrw", 00:09:51.604 "percentage": 50, 00:09:51.604 "status": "finished", 00:09:51.604 "queue_depth": 1, 00:09:51.604 "io_size": 131072, 00:09:51.604 "runtime": 1.324276, 00:09:51.604 "iops": 15903.02927788467, 00:09:51.604 "mibps": 1987.8786597355838, 00:09:51.604 "io_failed": 1, 00:09:51.604 "io_timeout": 0, 00:09:51.604 "avg_latency_us": 87.03798510834302, 00:09:51.604 "min_latency_us": 26.494323144104804, 00:09:51.604 "max_latency_us": 1380.8349344978167 00:09:51.604 } 00:09:51.604 ], 00:09:51.604 "core_count": 1 00:09:51.604 } 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85640 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85640 ']' 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85640 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85640 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.604 killing process with pid 85640 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85640' 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85640 00:09:51.604 [2024-12-15 18:40:51.869466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.604 18:40:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85640 00:09:51.604 [2024-12-15 18:40:51.905293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wmkiwRWPyE 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.864 18:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:51.864 00:09:51.864 real 0m3.269s 00:09:51.864 user 0m4.044s 00:09:51.864 sys 0m0.584s 00:09:51.865 18:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:51.865 ************************************ 00:09:51.865 END TEST raid_read_error_test 00:09:51.865 ************************************ 00:09:51.865 18:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 18:40:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:51.865 18:40:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.865 18:40:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.865 18:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 ************************************ 00:09:51.865 START TEST raid_write_error_test 00:09:51.865 ************************************ 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ohyY1KHkF6 00:09:51.865 18:40:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85769 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85769 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85769 ']' 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.865 18:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.125 [2024-12-15 18:40:52.306975] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
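The read test above reached its verdict by parsing the bdevperf log: drop the header lines containing `Job`, keep the `raid_bdev1` summary row, and take the sixth whitespace-separated field as the failures-per-second figure, which must be non-zero because concat has no redundancy to absorb the injected read error. A sketch of that pipeline over an assumed sample row (the real bdevperf column layout may differ; the sample line is fabricated for illustration only):

```shell
# Hedged sketch of the fail_per_s extraction in bdev_raid.sh@845-849.
# The sample below approximates a bdevperf summary row; the real format may differ.
sample='raid_bdev1 15903.03 1987.88 0.00 0.00 0.76 87.04'

fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

# concat offers no redundancy, so the injected error must surface as I/O failures.
if [ "$fail_per_s" != "0.00" ]; then
    echo "fail_per_s=$fail_per_s (error surfaced, as the test expects)"
fi
```

This matches the `[[ 0.76 != \0\.\0\0 ]]` check logged at the end of the read test: for a redundant level such as raid1 the expectation would be inverted.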
00:09:52.125 [2024-12-15 18:40:52.307095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85769 ] 00:09:52.125 [2024-12-15 18:40:52.476444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.125 [2024-12-15 18:40:52.502860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.125 [2024-12-15 18:40:52.545697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.125 [2024-12-15 18:40:52.545741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.695 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.695 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.695 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.695 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.956 BaseBdev1_malloc 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.956 true 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.956 [2024-12-15 18:40:53.169356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.956 [2024-12-15 18:40:53.169410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.956 [2024-12-15 18:40:53.169445] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.956 [2024-12-15 18:40:53.169457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.956 [2024-12-15 18:40:53.171512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.956 [2024-12-15 18:40:53.171548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.956 BaseBdev1 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.956 BaseBdev2_malloc 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.956 18:40:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.956 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.956 true 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 [2024-12-15 18:40:53.210060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.957 [2024-12-15 18:40:53.210111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.957 [2024-12-15 18:40:53.210130] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.957 [2024-12-15 18:40:53.210139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.957 [2024-12-15 18:40:53.212119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.957 [2024-12-15 18:40:53.212155] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.957 BaseBdev2 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:52.957 BaseBdev3_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 true 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 [2024-12-15 18:40:53.250550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.957 [2024-12-15 18:40:53.250601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.957 [2024-12-15 18:40:53.250623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.957 [2024-12-15 18:40:53.250632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.957 [2024-12-15 18:40:53.252665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.957 [2024-12-15 18:40:53.252703] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.957 BaseBdev3 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 BaseBdev4_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 true 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 [2024-12-15 18:40:53.302929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:52.957 [2024-12-15 18:40:53.302975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.957 [2024-12-15 18:40:53.302995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.957 [2024-12-15 18:40:53.303004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.957 [2024-12-15 18:40:53.304996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.957 [2024-12-15 18:40:53.305030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:52.957 BaseBdev4 
00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 [2024-12-15 18:40:53.314969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.957 [2024-12-15 18:40:53.316736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.957 [2024-12-15 18:40:53.316837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.957 [2024-12-15 18:40:53.316891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.957 [2024-12-15 18:40:53.317082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:52.957 [2024-12-15 18:40:53.317101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.957 [2024-12-15 18:40:53.317334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:52.957 [2024-12-15 18:40:53.317466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:52.957 [2024-12-15 18:40:53.317490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:52.957 [2024-12-15 18:40:53.317619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.957 "name": "raid_bdev1", 00:09:52.957 "uuid": "8367ff66-1192-4b37-bab6-86fa84d224ad", 00:09:52.957 "strip_size_kb": 64, 00:09:52.957 "state": "online", 00:09:52.957 "raid_level": "concat", 00:09:52.957 "superblock": true, 00:09:52.957 "num_base_bdevs": 4, 00:09:52.957 "num_base_bdevs_discovered": 4, 00:09:52.957 
"num_base_bdevs_operational": 4, 00:09:52.957 "base_bdevs_list": [ 00:09:52.957 { 00:09:52.957 "name": "BaseBdev1", 00:09:52.957 "uuid": "c91a1466-1f99-5ae2-a9e5-e5549e5cf1b9", 00:09:52.957 "is_configured": true, 00:09:52.957 "data_offset": 2048, 00:09:52.957 "data_size": 63488 00:09:52.957 }, 00:09:52.957 { 00:09:52.957 "name": "BaseBdev2", 00:09:52.957 "uuid": "4adf044d-4701-5947-8dd9-0d9724a99e30", 00:09:52.957 "is_configured": true, 00:09:52.957 "data_offset": 2048, 00:09:52.957 "data_size": 63488 00:09:52.957 }, 00:09:52.957 { 00:09:52.957 "name": "BaseBdev3", 00:09:52.957 "uuid": "cf4c4082-d39e-5d65-b1a1-62ab22d8db9b", 00:09:52.957 "is_configured": true, 00:09:52.957 "data_offset": 2048, 00:09:52.957 "data_size": 63488 00:09:52.957 }, 00:09:52.957 { 00:09:52.957 "name": "BaseBdev4", 00:09:52.957 "uuid": "ce97fb34-cdb5-5424-b9e4-95f9dc1ff5f5", 00:09:52.957 "is_configured": true, 00:09:52.957 "data_offset": 2048, 00:09:52.957 "data_size": 63488 00:09:52.957 } 00:09:52.957 ] 00:09:52.957 }' 00:09:52.957 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.958 18:40:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.532 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.532 18:40:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.532 [2024-12-15 18:40:53.810418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.505 18:40:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.505 "name": "raid_bdev1", 00:09:54.505 "uuid": "8367ff66-1192-4b37-bab6-86fa84d224ad", 00:09:54.505 "strip_size_kb": 64, 00:09:54.505 "state": "online", 00:09:54.505 "raid_level": "concat", 00:09:54.505 "superblock": true, 00:09:54.505 "num_base_bdevs": 4, 00:09:54.505 "num_base_bdevs_discovered": 4, 00:09:54.505 "num_base_bdevs_operational": 4, 00:09:54.505 "base_bdevs_list": [ 00:09:54.505 { 00:09:54.505 "name": "BaseBdev1", 00:09:54.505 "uuid": "c91a1466-1f99-5ae2-a9e5-e5549e5cf1b9", 00:09:54.505 "is_configured": true, 00:09:54.505 "data_offset": 2048, 00:09:54.505 "data_size": 63488 00:09:54.505 }, 00:09:54.505 { 00:09:54.505 "name": "BaseBdev2", 00:09:54.505 "uuid": "4adf044d-4701-5947-8dd9-0d9724a99e30", 00:09:54.505 "is_configured": true, 00:09:54.505 "data_offset": 2048, 00:09:54.505 "data_size": 63488 00:09:54.505 }, 00:09:54.505 { 00:09:54.505 "name": "BaseBdev3", 00:09:54.505 "uuid": "cf4c4082-d39e-5d65-b1a1-62ab22d8db9b", 00:09:54.505 "is_configured": true, 00:09:54.505 "data_offset": 2048, 00:09:54.505 "data_size": 63488 00:09:54.505 }, 00:09:54.505 { 00:09:54.505 "name": "BaseBdev4", 00:09:54.505 "uuid": "ce97fb34-cdb5-5424-b9e4-95f9dc1ff5f5", 00:09:54.505 "is_configured": true, 00:09:54.505 "data_offset": 2048, 00:09:54.505 "data_size": 63488 00:09:54.505 } 00:09:54.505 ] 00:09:54.505 }' 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.505 18:40:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.768 [2024-12-15 18:40:55.178185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.768 [2024-12-15 18:40:55.178220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.768 [2024-12-15 18:40:55.180746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.768 [2024-12-15 18:40:55.180814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.768 [2024-12-15 18:40:55.180860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.768 [2024-12-15 18:40:55.180876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:54.768 { 00:09:54.768 "results": [ 00:09:54.768 { 00:09:54.768 "job": "raid_bdev1", 00:09:54.768 "core_mask": "0x1", 00:09:54.768 "workload": "randrw", 00:09:54.768 "percentage": 50, 00:09:54.768 "status": "finished", 00:09:54.768 "queue_depth": 1, 00:09:54.768 "io_size": 131072, 00:09:54.768 "runtime": 1.368724, 00:09:54.768 "iops": 16166.15183192521, 00:09:54.768 "mibps": 2020.7689789906512, 00:09:54.768 "io_failed": 1, 00:09:54.768 "io_timeout": 0, 00:09:54.768 "avg_latency_us": 85.50639423820755, 00:09:54.768 "min_latency_us": 25.041048034934498, 00:09:54.768 "max_latency_us": 1480.9991266375546 00:09:54.768 } 00:09:54.768 ], 00:09:54.768 "core_count": 1 00:09:54.768 } 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85769 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85769 ']' 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85769 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.768 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85769 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.030 killing process with pid 85769 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85769' 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85769 00:09:55.030 [2024-12-15 18:40:55.222944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85769 00:09:55.030 [2024-12-15 18:40:55.258779] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ohyY1KHkF6 00:09:55.030 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:55.290 00:09:55.290 real 0m3.274s 00:09:55.290 user 0m4.099s 
00:09:55.290 sys 0m0.535s 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.290 18:40:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.290 ************************************ 00:09:55.290 END TEST raid_write_error_test 00:09:55.290 ************************************ 00:09:55.290 18:40:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:55.290 18:40:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:55.290 18:40:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.290 18:40:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.290 18:40:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.290 ************************************ 00:09:55.290 START TEST raid_state_function_test 00:09:55.290 ************************************ 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.290 
18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:55.290 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:55.291 18:40:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85896 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85896' 00:09:55.291 Process raid pid: 85896 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85896 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 85896 ']' 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.291 18:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.291 [2024-12-15 18:40:55.635780] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:55.291 [2024-12-15 18:40:55.635908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.551 [2024-12-15 18:40:55.807875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.551 [2024-12-15 18:40:55.835112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.551 [2024-12-15 18:40:55.878292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.551 [2024-12-15 18:40:55.878332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.121 [2024-12-15 18:40:56.485464] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.121 [2024-12-15 18:40:56.485530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.121 [2024-12-15 18:40:56.485541] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.121 [2024-12-15 18:40:56.485551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.121 [2024-12-15 18:40:56.485557] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:56.121 [2024-12-15 18:40:56.485568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.121 [2024-12-15 18:40:56.485574] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.121 [2024-12-15 18:40:56.485583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.121 "name": "Existed_Raid", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "strip_size_kb": 0, 00:09:56.121 "state": "configuring", 00:09:56.121 "raid_level": "raid1", 00:09:56.121 "superblock": false, 00:09:56.121 "num_base_bdevs": 4, 00:09:56.121 "num_base_bdevs_discovered": 0, 00:09:56.121 "num_base_bdevs_operational": 4, 00:09:56.121 "base_bdevs_list": [ 00:09:56.121 { 00:09:56.121 "name": "BaseBdev1", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 }, 00:09:56.121 { 00:09:56.121 "name": "BaseBdev2", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 }, 00:09:56.121 { 00:09:56.121 "name": "BaseBdev3", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 }, 00:09:56.121 { 00:09:56.121 "name": "BaseBdev4", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 } 00:09:56.121 ] 00:09:56.121 }' 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.121 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.690 [2024-12-15 18:40:56.928609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.690 [2024-12-15 18:40:56.928652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.690 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.690 [2024-12-15 18:40:56.940583] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.690 [2024-12-15 18:40:56.940626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.690 [2024-12-15 18:40:56.940634] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.690 [2024-12-15 18:40:56.940644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.691 [2024-12-15 18:40:56.940651] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.691 [2024-12-15 18:40:56.940659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.691 [2024-12-15 18:40:56.940666] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.691 [2024-12-15 18:40:56.940675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 [2024-12-15 18:40:56.961530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.691 BaseBdev1 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 [ 00:09:56.691 { 00:09:56.691 "name": "BaseBdev1", 00:09:56.691 "aliases": [ 00:09:56.691 "8ecdbd11-d958-4a15-b029-08c0923d795a" 00:09:56.691 ], 00:09:56.691 "product_name": "Malloc disk", 00:09:56.691 "block_size": 512, 00:09:56.691 "num_blocks": 65536, 00:09:56.691 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:56.691 "assigned_rate_limits": { 00:09:56.691 "rw_ios_per_sec": 0, 00:09:56.691 "rw_mbytes_per_sec": 0, 00:09:56.691 "r_mbytes_per_sec": 0, 00:09:56.691 "w_mbytes_per_sec": 0 00:09:56.691 }, 00:09:56.691 "claimed": true, 00:09:56.691 "claim_type": "exclusive_write", 00:09:56.691 "zoned": false, 00:09:56.691 "supported_io_types": { 00:09:56.691 "read": true, 00:09:56.691 "write": true, 00:09:56.691 "unmap": true, 00:09:56.691 "flush": true, 00:09:56.691 "reset": true, 00:09:56.691 "nvme_admin": false, 00:09:56.691 "nvme_io": false, 00:09:56.691 "nvme_io_md": false, 00:09:56.691 "write_zeroes": true, 00:09:56.691 "zcopy": true, 00:09:56.691 "get_zone_info": false, 00:09:56.691 "zone_management": false, 00:09:56.691 "zone_append": false, 00:09:56.691 "compare": false, 00:09:56.691 "compare_and_write": false, 00:09:56.691 "abort": true, 00:09:56.691 "seek_hole": false, 00:09:56.691 "seek_data": false, 00:09:56.691 "copy": true, 00:09:56.691 "nvme_iov_md": false 00:09:56.691 }, 00:09:56.691 "memory_domains": [ 00:09:56.691 { 00:09:56.691 "dma_device_id": "system", 00:09:56.691 "dma_device_type": 1 00:09:56.691 }, 00:09:56.691 { 00:09:56.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.691 "dma_device_type": 2 00:09:56.691 } 00:09:56.691 ], 00:09:56.691 "driver_specific": {} 00:09:56.691 } 00:09:56.691 ] 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.691 18:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.691 "name": "Existed_Raid", 00:09:56.691 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:56.691 "strip_size_kb": 0, 00:09:56.691 "state": "configuring", 00:09:56.691 "raid_level": "raid1", 00:09:56.691 "superblock": false, 00:09:56.691 "num_base_bdevs": 4, 00:09:56.691 "num_base_bdevs_discovered": 1, 00:09:56.691 "num_base_bdevs_operational": 4, 00:09:56.691 "base_bdevs_list": [ 00:09:56.691 { 00:09:56.691 "name": "BaseBdev1", 00:09:56.691 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:56.691 "is_configured": true, 00:09:56.691 "data_offset": 0, 00:09:56.691 "data_size": 65536 00:09:56.691 }, 00:09:56.691 { 00:09:56.691 "name": "BaseBdev2", 00:09:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.691 "is_configured": false, 00:09:56.691 "data_offset": 0, 00:09:56.691 "data_size": 0 00:09:56.691 }, 00:09:56.691 { 00:09:56.691 "name": "BaseBdev3", 00:09:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.691 "is_configured": false, 00:09:56.691 "data_offset": 0, 00:09:56.691 "data_size": 0 00:09:56.691 }, 00:09:56.691 { 00:09:56.691 "name": "BaseBdev4", 00:09:56.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.691 "is_configured": false, 00:09:56.691 "data_offset": 0, 00:09:56.691 "data_size": 0 00:09:56.691 } 00:09:56.691 ] 00:09:56.691 }' 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.691 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.260 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.260 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.261 [2024-12-15 18:40:57.468730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.261 [2024-12-15 18:40:57.468793] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.261 [2024-12-15 18:40:57.480713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.261 [2024-12-15 18:40:57.482611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.261 [2024-12-15 18:40:57.482653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.261 [2024-12-15 18:40:57.482664] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.261 [2024-12-15 18:40:57.482672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.261 [2024-12-15 18:40:57.482678] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.261 [2024-12-15 18:40:57.482686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:57.261 18:40:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.261 "name": "Existed_Raid", 00:09:57.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.261 "strip_size_kb": 0, 00:09:57.261 "state": "configuring", 00:09:57.261 "raid_level": "raid1", 00:09:57.261 "superblock": false, 00:09:57.261 "num_base_bdevs": 4, 00:09:57.261 "num_base_bdevs_discovered": 1, 00:09:57.261 
"num_base_bdevs_operational": 4, 00:09:57.261 "base_bdevs_list": [ 00:09:57.261 { 00:09:57.261 "name": "BaseBdev1", 00:09:57.261 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:57.261 "is_configured": true, 00:09:57.261 "data_offset": 0, 00:09:57.261 "data_size": 65536 00:09:57.261 }, 00:09:57.261 { 00:09:57.261 "name": "BaseBdev2", 00:09:57.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.261 "is_configured": false, 00:09:57.261 "data_offset": 0, 00:09:57.261 "data_size": 0 00:09:57.261 }, 00:09:57.261 { 00:09:57.261 "name": "BaseBdev3", 00:09:57.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.261 "is_configured": false, 00:09:57.261 "data_offset": 0, 00:09:57.261 "data_size": 0 00:09:57.261 }, 00:09:57.261 { 00:09:57.261 "name": "BaseBdev4", 00:09:57.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.261 "is_configured": false, 00:09:57.261 "data_offset": 0, 00:09:57.261 "data_size": 0 00:09:57.261 } 00:09:57.261 ] 00:09:57.261 }' 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.261 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.521 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.521 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.521 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 [2024-12-15 18:40:57.931100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.522 BaseBdev2 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.522 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.522 [ 00:09:57.522 { 00:09:57.522 "name": "BaseBdev2", 00:09:57.522 "aliases": [ 00:09:57.522 "9a09aa46-8851-4c5c-9792-563d7496ee8c" 00:09:57.522 ], 00:09:57.522 "product_name": "Malloc disk", 00:09:57.522 "block_size": 512, 00:09:57.522 "num_blocks": 65536, 00:09:57.522 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:57.522 "assigned_rate_limits": { 00:09:57.522 "rw_ios_per_sec": 0, 00:09:57.522 "rw_mbytes_per_sec": 0, 00:09:57.522 "r_mbytes_per_sec": 0, 00:09:57.522 "w_mbytes_per_sec": 0 00:09:57.522 }, 00:09:57.522 "claimed": true, 00:09:57.522 "claim_type": "exclusive_write", 00:09:57.522 "zoned": false, 00:09:57.522 "supported_io_types": { 00:09:57.522 "read": true, 00:09:57.522 "write": true, 00:09:57.522 
"unmap": true, 00:09:57.522 "flush": true, 00:09:57.522 "reset": true, 00:09:57.522 "nvme_admin": false, 00:09:57.522 "nvme_io": false, 00:09:57.522 "nvme_io_md": false, 00:09:57.522 "write_zeroes": true, 00:09:57.782 "zcopy": true, 00:09:57.782 "get_zone_info": false, 00:09:57.782 "zone_management": false, 00:09:57.782 "zone_append": false, 00:09:57.782 "compare": false, 00:09:57.782 "compare_and_write": false, 00:09:57.782 "abort": true, 00:09:57.782 "seek_hole": false, 00:09:57.782 "seek_data": false, 00:09:57.782 "copy": true, 00:09:57.782 "nvme_iov_md": false 00:09:57.782 }, 00:09:57.782 "memory_domains": [ 00:09:57.782 { 00:09:57.782 "dma_device_id": "system", 00:09:57.782 "dma_device_type": 1 00:09:57.782 }, 00:09:57.782 { 00:09:57.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.782 "dma_device_type": 2 00:09:57.782 } 00:09:57.782 ], 00:09:57.782 "driver_specific": {} 00:09:57.782 } 00:09:57.782 ] 00:09:57.782 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.782 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.782 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.782 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.782 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.783 18:40:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.783 18:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.783 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.783 "name": "Existed_Raid", 00:09:57.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.783 "strip_size_kb": 0, 00:09:57.783 "state": "configuring", 00:09:57.783 "raid_level": "raid1", 00:09:57.783 "superblock": false, 00:09:57.783 "num_base_bdevs": 4, 00:09:57.783 "num_base_bdevs_discovered": 2, 00:09:57.783 "num_base_bdevs_operational": 4, 00:09:57.783 "base_bdevs_list": [ 00:09:57.783 { 00:09:57.783 "name": "BaseBdev1", 00:09:57.783 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:57.783 "is_configured": true, 00:09:57.783 "data_offset": 0, 00:09:57.783 "data_size": 65536 00:09:57.783 }, 00:09:57.783 { 00:09:57.783 "name": "BaseBdev2", 00:09:57.783 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:57.783 "is_configured": true, 00:09:57.783 
"data_offset": 0, 00:09:57.783 "data_size": 65536 00:09:57.783 }, 00:09:57.783 { 00:09:57.783 "name": "BaseBdev3", 00:09:57.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.783 "is_configured": false, 00:09:57.783 "data_offset": 0, 00:09:57.783 "data_size": 0 00:09:57.783 }, 00:09:57.783 { 00:09:57.783 "name": "BaseBdev4", 00:09:57.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.783 "is_configured": false, 00:09:57.783 "data_offset": 0, 00:09:57.783 "data_size": 0 00:09:57.783 } 00:09:57.783 ] 00:09:57.783 }' 00:09:57.783 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.783 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.043 [2024-12-15 18:40:58.454670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.043 BaseBdev3 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.043 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 [ 00:09:58.302 { 00:09:58.302 "name": "BaseBdev3", 00:09:58.302 "aliases": [ 00:09:58.302 "8406c48b-2019-4711-9a4f-b91c29b71152" 00:09:58.302 ], 00:09:58.302 "product_name": "Malloc disk", 00:09:58.302 "block_size": 512, 00:09:58.302 "num_blocks": 65536, 00:09:58.302 "uuid": "8406c48b-2019-4711-9a4f-b91c29b71152", 00:09:58.302 "assigned_rate_limits": { 00:09:58.302 "rw_ios_per_sec": 0, 00:09:58.302 "rw_mbytes_per_sec": 0, 00:09:58.302 "r_mbytes_per_sec": 0, 00:09:58.302 "w_mbytes_per_sec": 0 00:09:58.302 }, 00:09:58.302 "claimed": true, 00:09:58.302 "claim_type": "exclusive_write", 00:09:58.302 "zoned": false, 00:09:58.302 "supported_io_types": { 00:09:58.302 "read": true, 00:09:58.302 "write": true, 00:09:58.302 "unmap": true, 00:09:58.302 "flush": true, 00:09:58.302 "reset": true, 00:09:58.302 "nvme_admin": false, 00:09:58.302 "nvme_io": false, 00:09:58.302 "nvme_io_md": false, 00:09:58.302 "write_zeroes": true, 00:09:58.302 "zcopy": true, 00:09:58.302 "get_zone_info": false, 00:09:58.302 "zone_management": false, 00:09:58.302 "zone_append": false, 00:09:58.302 "compare": false, 00:09:58.302 "compare_and_write": false, 00:09:58.302 "abort": true, 
00:09:58.302 "seek_hole": false, 00:09:58.302 "seek_data": false, 00:09:58.302 "copy": true, 00:09:58.302 "nvme_iov_md": false 00:09:58.302 }, 00:09:58.302 "memory_domains": [ 00:09:58.302 { 00:09:58.302 "dma_device_id": "system", 00:09:58.302 "dma_device_type": 1 00:09:58.302 }, 00:09:58.302 { 00:09:58.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.302 "dma_device_type": 2 00:09:58.302 } 00:09:58.302 ], 00:09:58.302 "driver_specific": {} 00:09:58.302 } 00:09:58.302 ] 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.302 18:40:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.302 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.302 "name": "Existed_Raid", 00:09:58.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.302 "strip_size_kb": 0, 00:09:58.302 "state": "configuring", 00:09:58.302 "raid_level": "raid1", 00:09:58.302 "superblock": false, 00:09:58.302 "num_base_bdevs": 4, 00:09:58.302 "num_base_bdevs_discovered": 3, 00:09:58.302 "num_base_bdevs_operational": 4, 00:09:58.302 "base_bdevs_list": [ 00:09:58.302 { 00:09:58.302 "name": "BaseBdev1", 00:09:58.302 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:58.302 "is_configured": true, 00:09:58.302 "data_offset": 0, 00:09:58.303 "data_size": 65536 00:09:58.303 }, 00:09:58.303 { 00:09:58.303 "name": "BaseBdev2", 00:09:58.303 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:58.303 "is_configured": true, 00:09:58.303 "data_offset": 0, 00:09:58.303 "data_size": 65536 00:09:58.303 }, 00:09:58.303 { 00:09:58.303 "name": "BaseBdev3", 00:09:58.303 "uuid": "8406c48b-2019-4711-9a4f-b91c29b71152", 00:09:58.303 "is_configured": true, 00:09:58.303 "data_offset": 0, 00:09:58.303 "data_size": 65536 00:09:58.303 }, 00:09:58.303 { 00:09:58.303 "name": "BaseBdev4", 00:09:58.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.303 "is_configured": false, 00:09:58.303 "data_offset": 
0, 00:09:58.303 "data_size": 0 00:09:58.303 } 00:09:58.303 ] 00:09:58.303 }' 00:09:58.303 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.303 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.562 [2024-12-15 18:40:58.949098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.562 [2024-12-15 18:40:58.949231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:58.562 [2024-12-15 18:40:58.949267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:58.562 [2024-12-15 18:40:58.949595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:58.562 [2024-12-15 18:40:58.949789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:58.562 [2024-12-15 18:40:58.949857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:58.562 [2024-12-15 18:40:58.950116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.562 BaseBdev4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.562 [ 00:09:58.562 { 00:09:58.562 "name": "BaseBdev4", 00:09:58.562 "aliases": [ 00:09:58.562 "58eb53a0-fc66-4efb-a48d-cda0a6a46c8d" 00:09:58.562 ], 00:09:58.562 "product_name": "Malloc disk", 00:09:58.562 "block_size": 512, 00:09:58.562 "num_blocks": 65536, 00:09:58.562 "uuid": "58eb53a0-fc66-4efb-a48d-cda0a6a46c8d", 00:09:58.562 "assigned_rate_limits": { 00:09:58.562 "rw_ios_per_sec": 0, 00:09:58.562 "rw_mbytes_per_sec": 0, 00:09:58.562 "r_mbytes_per_sec": 0, 00:09:58.562 "w_mbytes_per_sec": 0 00:09:58.562 }, 00:09:58.562 "claimed": true, 00:09:58.562 "claim_type": "exclusive_write", 00:09:58.562 "zoned": false, 00:09:58.562 "supported_io_types": { 00:09:58.562 "read": true, 00:09:58.562 "write": true, 00:09:58.562 "unmap": true, 00:09:58.562 "flush": true, 00:09:58.562 "reset": true, 00:09:58.562 "nvme_admin": false, 00:09:58.562 "nvme_io": 
false, 00:09:58.562 "nvme_io_md": false, 00:09:58.562 "write_zeroes": true, 00:09:58.562 "zcopy": true, 00:09:58.562 "get_zone_info": false, 00:09:58.562 "zone_management": false, 00:09:58.562 "zone_append": false, 00:09:58.562 "compare": false, 00:09:58.562 "compare_and_write": false, 00:09:58.562 "abort": true, 00:09:58.562 "seek_hole": false, 00:09:58.562 "seek_data": false, 00:09:58.562 "copy": true, 00:09:58.562 "nvme_iov_md": false 00:09:58.562 }, 00:09:58.562 "memory_domains": [ 00:09:58.562 { 00:09:58.562 "dma_device_id": "system", 00:09:58.562 "dma_device_type": 1 00:09:58.562 }, 00:09:58.562 { 00:09:58.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.562 "dma_device_type": 2 00:09:58.562 } 00:09:58.562 ], 00:09:58.562 "driver_specific": {} 00:09:58.562 } 00:09:58.562 ] 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.562 18:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.823 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.824 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.824 "name": "Existed_Raid", 00:09:58.824 "uuid": "7a5dc3e9-dc16-45d4-bade-15869a39c052", 00:09:58.824 "strip_size_kb": 0, 00:09:58.824 "state": "online", 00:09:58.824 "raid_level": "raid1", 00:09:58.824 "superblock": false, 00:09:58.824 "num_base_bdevs": 4, 00:09:58.824 "num_base_bdevs_discovered": 4, 00:09:58.824 "num_base_bdevs_operational": 4, 00:09:58.824 "base_bdevs_list": [ 00:09:58.824 { 00:09:58.824 "name": "BaseBdev1", 00:09:58.824 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:58.824 "is_configured": true, 00:09:58.824 "data_offset": 0, 00:09:58.824 "data_size": 65536 00:09:58.824 }, 00:09:58.824 { 00:09:58.824 "name": "BaseBdev2", 00:09:58.824 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:58.824 "is_configured": true, 00:09:58.824 "data_offset": 0, 00:09:58.824 "data_size": 65536 00:09:58.824 }, 00:09:58.824 { 00:09:58.824 "name": "BaseBdev3", 00:09:58.824 "uuid": "8406c48b-2019-4711-9a4f-b91c29b71152", 
00:09:58.824 "is_configured": true, 00:09:58.824 "data_offset": 0, 00:09:58.824 "data_size": 65536 00:09:58.824 }, 00:09:58.824 { 00:09:58.824 "name": "BaseBdev4", 00:09:58.824 "uuid": "58eb53a0-fc66-4efb-a48d-cda0a6a46c8d", 00:09:58.824 "is_configured": true, 00:09:58.824 "data_offset": 0, 00:09:58.824 "data_size": 65536 00:09:58.824 } 00:09:58.824 ] 00:09:58.824 }' 00:09:58.824 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.824 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.088 [2024-12-15 18:40:59.452743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.088 18:40:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.088 "name": "Existed_Raid", 00:09:59.088 "aliases": [ 00:09:59.088 "7a5dc3e9-dc16-45d4-bade-15869a39c052" 00:09:59.088 ], 00:09:59.088 "product_name": "Raid Volume", 00:09:59.088 "block_size": 512, 00:09:59.088 "num_blocks": 65536, 00:09:59.088 "uuid": "7a5dc3e9-dc16-45d4-bade-15869a39c052", 00:09:59.088 "assigned_rate_limits": { 00:09:59.088 "rw_ios_per_sec": 0, 00:09:59.088 "rw_mbytes_per_sec": 0, 00:09:59.088 "r_mbytes_per_sec": 0, 00:09:59.088 "w_mbytes_per_sec": 0 00:09:59.088 }, 00:09:59.088 "claimed": false, 00:09:59.088 "zoned": false, 00:09:59.088 "supported_io_types": { 00:09:59.088 "read": true, 00:09:59.088 "write": true, 00:09:59.088 "unmap": false, 00:09:59.088 "flush": false, 00:09:59.088 "reset": true, 00:09:59.088 "nvme_admin": false, 00:09:59.088 "nvme_io": false, 00:09:59.088 "nvme_io_md": false, 00:09:59.088 "write_zeroes": true, 00:09:59.088 "zcopy": false, 00:09:59.088 "get_zone_info": false, 00:09:59.088 "zone_management": false, 00:09:59.088 "zone_append": false, 00:09:59.088 "compare": false, 00:09:59.088 "compare_and_write": false, 00:09:59.088 "abort": false, 00:09:59.088 "seek_hole": false, 00:09:59.088 "seek_data": false, 00:09:59.088 "copy": false, 00:09:59.088 "nvme_iov_md": false 00:09:59.088 }, 00:09:59.088 "memory_domains": [ 00:09:59.088 { 00:09:59.088 "dma_device_id": "system", 00:09:59.088 "dma_device_type": 1 00:09:59.088 }, 00:09:59.088 { 00:09:59.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.088 "dma_device_type": 2 00:09:59.088 }, 00:09:59.088 { 00:09:59.088 "dma_device_id": "system", 00:09:59.088 "dma_device_type": 1 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.089 "dma_device_type": 2 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "dma_device_id": "system", 00:09:59.089 "dma_device_type": 1 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.089 "dma_device_type": 2 
00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "dma_device_id": "system", 00:09:59.089 "dma_device_type": 1 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.089 "dma_device_type": 2 00:09:59.089 } 00:09:59.089 ], 00:09:59.089 "driver_specific": { 00:09:59.089 "raid": { 00:09:59.089 "uuid": "7a5dc3e9-dc16-45d4-bade-15869a39c052", 00:09:59.089 "strip_size_kb": 0, 00:09:59.089 "state": "online", 00:09:59.089 "raid_level": "raid1", 00:09:59.089 "superblock": false, 00:09:59.089 "num_base_bdevs": 4, 00:09:59.089 "num_base_bdevs_discovered": 4, 00:09:59.089 "num_base_bdevs_operational": 4, 00:09:59.089 "base_bdevs_list": [ 00:09:59.089 { 00:09:59.089 "name": "BaseBdev1", 00:09:59.089 "uuid": "8ecdbd11-d958-4a15-b029-08c0923d795a", 00:09:59.089 "is_configured": true, 00:09:59.089 "data_offset": 0, 00:09:59.089 "data_size": 65536 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "name": "BaseBdev2", 00:09:59.089 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:59.089 "is_configured": true, 00:09:59.089 "data_offset": 0, 00:09:59.089 "data_size": 65536 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "name": "BaseBdev3", 00:09:59.089 "uuid": "8406c48b-2019-4711-9a4f-b91c29b71152", 00:09:59.089 "is_configured": true, 00:09:59.089 "data_offset": 0, 00:09:59.089 "data_size": 65536 00:09:59.089 }, 00:09:59.089 { 00:09:59.089 "name": "BaseBdev4", 00:09:59.089 "uuid": "58eb53a0-fc66-4efb-a48d-cda0a6a46c8d", 00:09:59.089 "is_configured": true, 00:09:59.089 "data_offset": 0, 00:09:59.089 "data_size": 65536 00:09:59.089 } 00:09:59.089 ] 00:09:59.089 } 00:09:59.089 } 00:09:59.089 }' 00:09:59.089 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.349 BaseBdev2 00:09:59.349 BaseBdev3 00:09:59.349 BaseBdev4' 00:09:59.349 
18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.349 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.350 [2024-12-15 18:40:59.755910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.350 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.611 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.611 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.611 "name": "Existed_Raid", 00:09:59.611 "uuid": "7a5dc3e9-dc16-45d4-bade-15869a39c052", 00:09:59.611 "strip_size_kb": 0, 00:09:59.611 "state": "online", 00:09:59.611 "raid_level": "raid1", 00:09:59.611 "superblock": false, 00:09:59.611 "num_base_bdevs": 4, 00:09:59.611 "num_base_bdevs_discovered": 3, 00:09:59.611 "num_base_bdevs_operational": 3, 00:09:59.611 "base_bdevs_list": [ 00:09:59.611 { 00:09:59.611 "name": null, 00:09:59.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.611 "is_configured": false, 00:09:59.611 "data_offset": 0, 00:09:59.611 "data_size": 65536 00:09:59.611 }, 00:09:59.611 { 00:09:59.611 "name": "BaseBdev2", 00:09:59.611 "uuid": "9a09aa46-8851-4c5c-9792-563d7496ee8c", 00:09:59.611 "is_configured": true, 00:09:59.611 "data_offset": 0, 00:09:59.611 "data_size": 65536 00:09:59.611 }, 00:09:59.611 { 00:09:59.611 "name": "BaseBdev3", 00:09:59.611 "uuid": "8406c48b-2019-4711-9a4f-b91c29b71152", 00:09:59.611 "is_configured": true, 00:09:59.611 "data_offset": 0, 00:09:59.611 "data_size": 65536 00:09:59.611 }, 00:09:59.611 { 
00:09:59.611 "name": "BaseBdev4", 00:09:59.611 "uuid": "58eb53a0-fc66-4efb-a48d-cda0a6a46c8d", 00:09:59.611 "is_configured": true, 00:09:59.611 "data_offset": 0, 00:09:59.611 "data_size": 65536 00:09:59.611 } 00:09:59.611 ] 00:09:59.611 }' 00:09:59.611 18:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.611 18:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.870 [2024-12-15 18:41:00.290321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.870 
18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.870 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 [2024-12-15 18:41:00.365509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.130 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 [2024-12-15 18:41:00.432848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:00.130 [2024-12-15 18:41:00.432998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.130 [2024-12-15 18:41:00.444873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.130 [2024-12-15 18:41:00.445003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.130 [2024-12-15 18:41:00.445048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.130 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 BaseBdev2 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.130 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 [ 00:10:00.130 { 00:10:00.130 "name": "BaseBdev2", 00:10:00.130 "aliases": [ 00:10:00.130 "d4721225-6c39-441e-a6da-5a63755fe69a" 00:10:00.130 ], 00:10:00.130 "product_name": "Malloc disk", 00:10:00.130 "block_size": 512, 00:10:00.130 "num_blocks": 65536, 00:10:00.130 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:00.130 "assigned_rate_limits": { 00:10:00.130 "rw_ios_per_sec": 0, 00:10:00.130 "rw_mbytes_per_sec": 0, 00:10:00.130 "r_mbytes_per_sec": 0, 00:10:00.130 "w_mbytes_per_sec": 0 00:10:00.130 }, 00:10:00.130 "claimed": false, 00:10:00.130 "zoned": false, 00:10:00.130 "supported_io_types": { 00:10:00.130 "read": true, 00:10:00.130 "write": true, 00:10:00.130 "unmap": true, 00:10:00.130 "flush": true, 00:10:00.130 "reset": true, 00:10:00.130 "nvme_admin": false, 00:10:00.130 "nvme_io": false, 00:10:00.130 "nvme_io_md": false, 00:10:00.130 "write_zeroes": true, 00:10:00.130 "zcopy": true, 00:10:00.130 "get_zone_info": false, 00:10:00.130 "zone_management": false, 00:10:00.130 "zone_append": false, 00:10:00.130 "compare": false, 00:10:00.130 "compare_and_write": false, 
00:10:00.130 "abort": true, 00:10:00.130 "seek_hole": false, 00:10:00.130 "seek_data": false, 00:10:00.130 "copy": true, 00:10:00.130 "nvme_iov_md": false 00:10:00.130 }, 00:10:00.130 "memory_domains": [ 00:10:00.130 { 00:10:00.130 "dma_device_id": "system", 00:10:00.130 "dma_device_type": 1 00:10:00.130 }, 00:10:00.130 { 00:10:00.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.130 "dma_device_type": 2 00:10:00.130 } 00:10:00.130 ], 00:10:00.130 "driver_specific": {} 00:10:00.130 } 00:10:00.130 ] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.130 BaseBdev3 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.130 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.131 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.131 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 [ 00:10:00.391 { 00:10:00.391 "name": "BaseBdev3", 00:10:00.391 "aliases": [ 00:10:00.391 "6f3b8397-9573-40cc-9f93-9b67b900ae95" 00:10:00.391 ], 00:10:00.391 "product_name": "Malloc disk", 00:10:00.391 "block_size": 512, 00:10:00.391 "num_blocks": 65536, 00:10:00.391 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:00.391 "assigned_rate_limits": { 00:10:00.391 "rw_ios_per_sec": 0, 00:10:00.391 "rw_mbytes_per_sec": 0, 00:10:00.391 "r_mbytes_per_sec": 0, 00:10:00.391 "w_mbytes_per_sec": 0 00:10:00.391 }, 00:10:00.391 "claimed": false, 00:10:00.391 "zoned": false, 00:10:00.391 "supported_io_types": { 00:10:00.391 "read": true, 00:10:00.391 "write": true, 00:10:00.391 "unmap": true, 00:10:00.391 "flush": true, 00:10:00.391 "reset": true, 00:10:00.391 "nvme_admin": false, 00:10:00.391 "nvme_io": false, 00:10:00.391 "nvme_io_md": false, 00:10:00.391 "write_zeroes": true, 00:10:00.391 "zcopy": true, 00:10:00.391 "get_zone_info": false, 00:10:00.391 "zone_management": false, 00:10:00.391 "zone_append": false, 00:10:00.391 "compare": false, 00:10:00.391 "compare_and_write": false, 
00:10:00.391 "abort": true, 00:10:00.391 "seek_hole": false, 00:10:00.391 "seek_data": false, 00:10:00.391 "copy": true, 00:10:00.391 "nvme_iov_md": false 00:10:00.391 }, 00:10:00.391 "memory_domains": [ 00:10:00.391 { 00:10:00.391 "dma_device_id": "system", 00:10:00.391 "dma_device_type": 1 00:10:00.391 }, 00:10:00.391 { 00:10:00.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.391 "dma_device_type": 2 00:10:00.391 } 00:10:00.391 ], 00:10:00.391 "driver_specific": {} 00:10:00.391 } 00:10:00.391 ] 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 BaseBdev4 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.391 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.391 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.391 [ 00:10:00.391 { 00:10:00.391 "name": "BaseBdev4", 00:10:00.391 "aliases": [ 00:10:00.391 "5c095e55-c396-4af1-ac06-f1b97d49995b" 00:10:00.391 ], 00:10:00.391 "product_name": "Malloc disk", 00:10:00.391 "block_size": 512, 00:10:00.391 "num_blocks": 65536, 00:10:00.392 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:00.392 "assigned_rate_limits": { 00:10:00.392 "rw_ios_per_sec": 0, 00:10:00.392 "rw_mbytes_per_sec": 0, 00:10:00.392 "r_mbytes_per_sec": 0, 00:10:00.392 "w_mbytes_per_sec": 0 00:10:00.392 }, 00:10:00.392 "claimed": false, 00:10:00.392 "zoned": false, 00:10:00.392 "supported_io_types": { 00:10:00.392 "read": true, 00:10:00.392 "write": true, 00:10:00.392 "unmap": true, 00:10:00.392 "flush": true, 00:10:00.392 "reset": true, 00:10:00.392 "nvme_admin": false, 00:10:00.392 "nvme_io": false, 00:10:00.392 "nvme_io_md": false, 00:10:00.392 "write_zeroes": true, 00:10:00.392 "zcopy": true, 00:10:00.392 "get_zone_info": false, 00:10:00.392 "zone_management": false, 00:10:00.392 "zone_append": false, 00:10:00.392 "compare": false, 00:10:00.392 "compare_and_write": false, 
00:10:00.392 "abort": true, 00:10:00.392 "seek_hole": false, 00:10:00.392 "seek_data": false, 00:10:00.392 "copy": true, 00:10:00.392 "nvme_iov_md": false 00:10:00.392 }, 00:10:00.392 "memory_domains": [ 00:10:00.392 { 00:10:00.392 "dma_device_id": "system", 00:10:00.392 "dma_device_type": 1 00:10:00.392 }, 00:10:00.392 { 00:10:00.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.392 "dma_device_type": 2 00:10:00.392 } 00:10:00.392 ], 00:10:00.392 "driver_specific": {} 00:10:00.392 } 00:10:00.392 ] 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.392 [2024-12-15 18:41:00.653996] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.392 [2024-12-15 18:41:00.654113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.392 [2024-12-15 18:41:00.654168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.392 [2024-12-15 18:41:00.656366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.392 [2024-12-15 18:41:00.656463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.392 18:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.392 "name": "Existed_Raid", 00:10:00.392 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:00.392 "strip_size_kb": 0, 00:10:00.392 "state": "configuring", 00:10:00.392 "raid_level": "raid1", 00:10:00.392 "superblock": false, 00:10:00.392 "num_base_bdevs": 4, 00:10:00.392 "num_base_bdevs_discovered": 3, 00:10:00.392 "num_base_bdevs_operational": 4, 00:10:00.392 "base_bdevs_list": [ 00:10:00.392 { 00:10:00.392 "name": "BaseBdev1", 00:10:00.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.392 "is_configured": false, 00:10:00.392 "data_offset": 0, 00:10:00.392 "data_size": 0 00:10:00.392 }, 00:10:00.392 { 00:10:00.392 "name": "BaseBdev2", 00:10:00.392 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:00.392 "is_configured": true, 00:10:00.392 "data_offset": 0, 00:10:00.392 "data_size": 65536 00:10:00.392 }, 00:10:00.392 { 00:10:00.392 "name": "BaseBdev3", 00:10:00.392 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:00.392 "is_configured": true, 00:10:00.392 "data_offset": 0, 00:10:00.392 "data_size": 65536 00:10:00.392 }, 00:10:00.392 { 00:10:00.392 "name": "BaseBdev4", 00:10:00.392 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:00.392 "is_configured": true, 00:10:00.392 "data_offset": 0, 00:10:00.392 "data_size": 65536 00:10:00.392 } 00:10:00.392 ] 00:10:00.392 }' 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.392 18:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.962 [2024-12-15 18:41:01.117194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.962 "name": "Existed_Raid", 00:10:00.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.962 
"strip_size_kb": 0, 00:10:00.962 "state": "configuring", 00:10:00.962 "raid_level": "raid1", 00:10:00.962 "superblock": false, 00:10:00.962 "num_base_bdevs": 4, 00:10:00.962 "num_base_bdevs_discovered": 2, 00:10:00.962 "num_base_bdevs_operational": 4, 00:10:00.962 "base_bdevs_list": [ 00:10:00.962 { 00:10:00.962 "name": "BaseBdev1", 00:10:00.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.962 "is_configured": false, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 0 00:10:00.962 }, 00:10:00.962 { 00:10:00.962 "name": null, 00:10:00.962 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:00.962 "is_configured": false, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 65536 00:10:00.962 }, 00:10:00.962 { 00:10:00.962 "name": "BaseBdev3", 00:10:00.962 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:00.962 "is_configured": true, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 65536 00:10:00.962 }, 00:10:00.962 { 00:10:00.962 "name": "BaseBdev4", 00:10:00.962 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:00.962 "is_configured": true, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 65536 00:10:00.962 } 00:10:00.962 ] 00:10:00.962 }' 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.962 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.223 18:41:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.223 [2024-12-15 18:41:01.647385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.223 BaseBdev1 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.223 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.483 [ 00:10:01.483 { 00:10:01.483 "name": "BaseBdev1", 00:10:01.483 "aliases": [ 00:10:01.483 "43022c57-bea7-4cfb-af44-2799c355d528" 00:10:01.483 ], 00:10:01.483 "product_name": "Malloc disk", 00:10:01.483 "block_size": 512, 00:10:01.483 "num_blocks": 65536, 00:10:01.483 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:01.483 "assigned_rate_limits": { 00:10:01.483 "rw_ios_per_sec": 0, 00:10:01.483 "rw_mbytes_per_sec": 0, 00:10:01.483 "r_mbytes_per_sec": 0, 00:10:01.484 "w_mbytes_per_sec": 0 00:10:01.484 }, 00:10:01.484 "claimed": true, 00:10:01.484 "claim_type": "exclusive_write", 00:10:01.484 "zoned": false, 00:10:01.484 "supported_io_types": { 00:10:01.484 "read": true, 00:10:01.484 "write": true, 00:10:01.484 "unmap": true, 00:10:01.484 "flush": true, 00:10:01.484 "reset": true, 00:10:01.484 "nvme_admin": false, 00:10:01.484 "nvme_io": false, 00:10:01.484 "nvme_io_md": false, 00:10:01.484 "write_zeroes": true, 00:10:01.484 "zcopy": true, 00:10:01.484 "get_zone_info": false, 00:10:01.484 "zone_management": false, 00:10:01.484 "zone_append": false, 00:10:01.484 "compare": false, 00:10:01.484 "compare_and_write": false, 00:10:01.484 "abort": true, 00:10:01.484 "seek_hole": false, 00:10:01.484 "seek_data": false, 00:10:01.484 "copy": true, 00:10:01.484 "nvme_iov_md": false 00:10:01.484 }, 00:10:01.484 "memory_domains": [ 00:10:01.484 { 00:10:01.484 "dma_device_id": "system", 00:10:01.484 "dma_device_type": 1 00:10:01.484 }, 00:10:01.484 { 00:10:01.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.484 "dma_device_type": 2 00:10:01.484 } 00:10:01.484 ], 00:10:01.484 "driver_specific": {} 00:10:01.484 } 00:10:01.484 ] 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.484 "name": "Existed_Raid", 00:10:01.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.484 
"strip_size_kb": 0, 00:10:01.484 "state": "configuring", 00:10:01.484 "raid_level": "raid1", 00:10:01.484 "superblock": false, 00:10:01.484 "num_base_bdevs": 4, 00:10:01.484 "num_base_bdevs_discovered": 3, 00:10:01.484 "num_base_bdevs_operational": 4, 00:10:01.484 "base_bdevs_list": [ 00:10:01.484 { 00:10:01.484 "name": "BaseBdev1", 00:10:01.484 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:01.484 "is_configured": true, 00:10:01.484 "data_offset": 0, 00:10:01.484 "data_size": 65536 00:10:01.484 }, 00:10:01.484 { 00:10:01.484 "name": null, 00:10:01.484 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:01.484 "is_configured": false, 00:10:01.484 "data_offset": 0, 00:10:01.484 "data_size": 65536 00:10:01.484 }, 00:10:01.484 { 00:10:01.484 "name": "BaseBdev3", 00:10:01.484 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:01.484 "is_configured": true, 00:10:01.484 "data_offset": 0, 00:10:01.484 "data_size": 65536 00:10:01.484 }, 00:10:01.484 { 00:10:01.484 "name": "BaseBdev4", 00:10:01.484 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:01.484 "is_configured": true, 00:10:01.484 "data_offset": 0, 00:10:01.484 "data_size": 65536 00:10:01.484 } 00:10:01.484 ] 00:10:01.484 }' 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.484 18:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.743 
18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.743 [2024-12-15 18:41:02.118758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.743 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.744 "name": "Existed_Raid", 00:10:01.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.744 "strip_size_kb": 0, 00:10:01.744 "state": "configuring", 00:10:01.744 "raid_level": "raid1", 00:10:01.744 "superblock": false, 00:10:01.744 "num_base_bdevs": 4, 00:10:01.744 "num_base_bdevs_discovered": 2, 00:10:01.744 "num_base_bdevs_operational": 4, 00:10:01.744 "base_bdevs_list": [ 00:10:01.744 { 00:10:01.744 "name": "BaseBdev1", 00:10:01.744 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:01.744 "is_configured": true, 00:10:01.744 "data_offset": 0, 00:10:01.744 "data_size": 65536 00:10:01.744 }, 00:10:01.744 { 00:10:01.744 "name": null, 00:10:01.744 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:01.744 "is_configured": false, 00:10:01.744 "data_offset": 0, 00:10:01.744 "data_size": 65536 00:10:01.744 }, 00:10:01.744 { 00:10:01.744 "name": null, 00:10:01.744 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:01.744 "is_configured": false, 00:10:01.744 "data_offset": 0, 00:10:01.744 "data_size": 65536 00:10:01.744 }, 00:10:01.744 { 00:10:01.744 "name": "BaseBdev4", 00:10:01.744 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:01.744 "is_configured": true, 00:10:01.744 "data_offset": 0, 00:10:01.744 "data_size": 65536 00:10:01.744 } 00:10:01.744 ] 00:10:01.744 }' 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.744 18:41:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.314 [2024-12-15 18:41:02.546013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.314 "name": "Existed_Raid", 00:10:02.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.314 "strip_size_kb": 0, 00:10:02.314 "state": "configuring", 00:10:02.314 "raid_level": "raid1", 00:10:02.314 "superblock": false, 00:10:02.314 "num_base_bdevs": 4, 00:10:02.314 "num_base_bdevs_discovered": 3, 00:10:02.314 "num_base_bdevs_operational": 4, 00:10:02.314 "base_bdevs_list": [ 00:10:02.314 { 00:10:02.314 "name": "BaseBdev1", 00:10:02.314 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:02.314 "is_configured": true, 00:10:02.314 "data_offset": 0, 00:10:02.314 "data_size": 65536 00:10:02.314 }, 00:10:02.314 { 00:10:02.314 "name": null, 00:10:02.314 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:02.314 "is_configured": false, 00:10:02.314 "data_offset": 0, 00:10:02.314 "data_size": 65536 00:10:02.314 }, 00:10:02.314 { 
00:10:02.314 "name": "BaseBdev3", 00:10:02.314 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:02.314 "is_configured": true, 00:10:02.314 "data_offset": 0, 00:10:02.314 "data_size": 65536 00:10:02.314 }, 00:10:02.314 { 00:10:02.314 "name": "BaseBdev4", 00:10:02.314 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:02.314 "is_configured": true, 00:10:02.314 "data_offset": 0, 00:10:02.314 "data_size": 65536 00:10:02.314 } 00:10:02.314 ] 00:10:02.314 }' 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.314 18:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.574 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.574 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.574 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.574 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.834 [2024-12-15 18:41:03.061187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.834 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.835 "name": "Existed_Raid", 00:10:02.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.835 "strip_size_kb": 0, 00:10:02.835 "state": "configuring", 00:10:02.835 "raid_level": "raid1", 00:10:02.835 "superblock": false, 00:10:02.835 
"num_base_bdevs": 4, 00:10:02.835 "num_base_bdevs_discovered": 2, 00:10:02.835 "num_base_bdevs_operational": 4, 00:10:02.835 "base_bdevs_list": [ 00:10:02.835 { 00:10:02.835 "name": null, 00:10:02.835 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:02.835 "is_configured": false, 00:10:02.835 "data_offset": 0, 00:10:02.835 "data_size": 65536 00:10:02.835 }, 00:10:02.835 { 00:10:02.835 "name": null, 00:10:02.835 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:02.835 "is_configured": false, 00:10:02.835 "data_offset": 0, 00:10:02.835 "data_size": 65536 00:10:02.835 }, 00:10:02.835 { 00:10:02.835 "name": "BaseBdev3", 00:10:02.835 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:02.835 "is_configured": true, 00:10:02.835 "data_offset": 0, 00:10:02.835 "data_size": 65536 00:10:02.835 }, 00:10:02.835 { 00:10:02.835 "name": "BaseBdev4", 00:10:02.835 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:02.835 "is_configured": true, 00:10:02.835 "data_offset": 0, 00:10:02.835 "data_size": 65536 00:10:02.835 } 00:10:02.835 ] 00:10:02.835 }' 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.835 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.095 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.095 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.095 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.095 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.353 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.353 18:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.353 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.353 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.353 [2024-12-15 18:41:03.578925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.354 18:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.354 "name": "Existed_Raid", 00:10:03.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.354 "strip_size_kb": 0, 00:10:03.354 "state": "configuring", 00:10:03.354 "raid_level": "raid1", 00:10:03.354 "superblock": false, 00:10:03.354 "num_base_bdevs": 4, 00:10:03.354 "num_base_bdevs_discovered": 3, 00:10:03.354 "num_base_bdevs_operational": 4, 00:10:03.354 "base_bdevs_list": [ 00:10:03.354 { 00:10:03.354 "name": null, 00:10:03.354 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:03.354 "is_configured": false, 00:10:03.354 "data_offset": 0, 00:10:03.354 "data_size": 65536 00:10:03.354 }, 00:10:03.354 { 00:10:03.354 "name": "BaseBdev2", 00:10:03.354 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:03.354 "is_configured": true, 00:10:03.354 "data_offset": 0, 00:10:03.354 "data_size": 65536 00:10:03.354 }, 00:10:03.354 { 00:10:03.354 "name": "BaseBdev3", 00:10:03.354 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:03.354 "is_configured": true, 00:10:03.354 "data_offset": 0, 00:10:03.354 "data_size": 65536 00:10:03.354 }, 00:10:03.354 { 00:10:03.354 "name": "BaseBdev4", 00:10:03.354 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:03.354 "is_configured": true, 00:10:03.354 "data_offset": 0, 00:10:03.354 "data_size": 65536 00:10:03.354 } 00:10:03.354 ] 00:10:03.354 }' 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.354 18:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.612 18:41:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.612 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.612 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.612 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.612 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 43022c57-bea7-4cfb-af44-2799c355d528 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.871 [2024-12-15 18:41:04.140994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.871 [2024-12-15 18:41:04.141114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:03.871 [2024-12-15 18:41:04.141142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:03.871 
[2024-12-15 18:41:04.141428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:03.871 [2024-12-15 18:41:04.141608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:03.871 [2024-12-15 18:41:04.141650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:03.871 NewBaseBdev 00:10:03.871 [2024-12-15 18:41:04.141872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:03.871 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.871 [ 00:10:03.871 { 00:10:03.871 "name": "NewBaseBdev", 00:10:03.871 "aliases": [ 00:10:03.871 "43022c57-bea7-4cfb-af44-2799c355d528" 00:10:03.871 ], 00:10:03.871 "product_name": "Malloc disk", 00:10:03.871 "block_size": 512, 00:10:03.871 "num_blocks": 65536, 00:10:03.871 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:03.871 "assigned_rate_limits": { 00:10:03.871 "rw_ios_per_sec": 0, 00:10:03.871 "rw_mbytes_per_sec": 0, 00:10:03.871 "r_mbytes_per_sec": 0, 00:10:03.871 "w_mbytes_per_sec": 0 00:10:03.871 }, 00:10:03.871 "claimed": true, 00:10:03.871 "claim_type": "exclusive_write", 00:10:03.872 "zoned": false, 00:10:03.872 "supported_io_types": { 00:10:03.872 "read": true, 00:10:03.872 "write": true, 00:10:03.872 "unmap": true, 00:10:03.872 "flush": true, 00:10:03.872 "reset": true, 00:10:03.872 "nvme_admin": false, 00:10:03.872 "nvme_io": false, 00:10:03.872 "nvme_io_md": false, 00:10:03.872 "write_zeroes": true, 00:10:03.872 "zcopy": true, 00:10:03.872 "get_zone_info": false, 00:10:03.872 "zone_management": false, 00:10:03.872 "zone_append": false, 00:10:03.872 "compare": false, 00:10:03.872 "compare_and_write": false, 00:10:03.872 "abort": true, 00:10:03.872 "seek_hole": false, 00:10:03.872 "seek_data": false, 00:10:03.872 "copy": true, 00:10:03.872 "nvme_iov_md": false 00:10:03.872 }, 00:10:03.872 "memory_domains": [ 00:10:03.872 { 00:10:03.872 "dma_device_id": "system", 00:10:03.872 "dma_device_type": 1 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.872 "dma_device_type": 2 00:10:03.872 } 00:10:03.872 ], 00:10:03.872 "driver_specific": {} 00:10:03.872 } 00:10:03.872 ] 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.872 "name": "Existed_Raid", 00:10:03.872 "uuid": "fd1db485-6a7f-4344-91f6-23779a4e8b34", 00:10:03.872 "strip_size_kb": 0, 00:10:03.872 "state": "online", 00:10:03.872 
"raid_level": "raid1", 00:10:03.872 "superblock": false, 00:10:03.872 "num_base_bdevs": 4, 00:10:03.872 "num_base_bdevs_discovered": 4, 00:10:03.872 "num_base_bdevs_operational": 4, 00:10:03.872 "base_bdevs_list": [ 00:10:03.872 { 00:10:03.872 "name": "NewBaseBdev", 00:10:03.872 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "name": "BaseBdev2", 00:10:03.872 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "name": "BaseBdev3", 00:10:03.872 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 }, 00:10:03.872 { 00:10:03.872 "name": "BaseBdev4", 00:10:03.872 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:03.872 "is_configured": true, 00:10:03.872 "data_offset": 0, 00:10:03.872 "data_size": 65536 00:10:03.872 } 00:10:03.872 ] 00:10:03.872 }' 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.872 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:04.439 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.440 [2024-12-15 18:41:04.656582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.440 "name": "Existed_Raid", 00:10:04.440 "aliases": [ 00:10:04.440 "fd1db485-6a7f-4344-91f6-23779a4e8b34" 00:10:04.440 ], 00:10:04.440 "product_name": "Raid Volume", 00:10:04.440 "block_size": 512, 00:10:04.440 "num_blocks": 65536, 00:10:04.440 "uuid": "fd1db485-6a7f-4344-91f6-23779a4e8b34", 00:10:04.440 "assigned_rate_limits": { 00:10:04.440 "rw_ios_per_sec": 0, 00:10:04.440 "rw_mbytes_per_sec": 0, 00:10:04.440 "r_mbytes_per_sec": 0, 00:10:04.440 "w_mbytes_per_sec": 0 00:10:04.440 }, 00:10:04.440 "claimed": false, 00:10:04.440 "zoned": false, 00:10:04.440 "supported_io_types": { 00:10:04.440 "read": true, 00:10:04.440 "write": true, 00:10:04.440 "unmap": false, 00:10:04.440 "flush": false, 00:10:04.440 "reset": true, 00:10:04.440 "nvme_admin": false, 00:10:04.440 "nvme_io": false, 00:10:04.440 "nvme_io_md": false, 00:10:04.440 "write_zeroes": true, 00:10:04.440 "zcopy": false, 00:10:04.440 "get_zone_info": false, 00:10:04.440 "zone_management": false, 00:10:04.440 "zone_append": false, 00:10:04.440 "compare": false, 00:10:04.440 "compare_and_write": false, 00:10:04.440 "abort": false, 00:10:04.440 "seek_hole": false, 00:10:04.440 "seek_data": false, 00:10:04.440 
"copy": false, 00:10:04.440 "nvme_iov_md": false 00:10:04.440 }, 00:10:04.440 "memory_domains": [ 00:10:04.440 { 00:10:04.440 "dma_device_id": "system", 00:10:04.440 "dma_device_type": 1 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.440 "dma_device_type": 2 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "system", 00:10:04.440 "dma_device_type": 1 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.440 "dma_device_type": 2 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "system", 00:10:04.440 "dma_device_type": 1 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.440 "dma_device_type": 2 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "system", 00:10:04.440 "dma_device_type": 1 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.440 "dma_device_type": 2 00:10:04.440 } 00:10:04.440 ], 00:10:04.440 "driver_specific": { 00:10:04.440 "raid": { 00:10:04.440 "uuid": "fd1db485-6a7f-4344-91f6-23779a4e8b34", 00:10:04.440 "strip_size_kb": 0, 00:10:04.440 "state": "online", 00:10:04.440 "raid_level": "raid1", 00:10:04.440 "superblock": false, 00:10:04.440 "num_base_bdevs": 4, 00:10:04.440 "num_base_bdevs_discovered": 4, 00:10:04.440 "num_base_bdevs_operational": 4, 00:10:04.440 "base_bdevs_list": [ 00:10:04.440 { 00:10:04.440 "name": "NewBaseBdev", 00:10:04.440 "uuid": "43022c57-bea7-4cfb-af44-2799c355d528", 00:10:04.440 "is_configured": true, 00:10:04.440 "data_offset": 0, 00:10:04.440 "data_size": 65536 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "name": "BaseBdev2", 00:10:04.440 "uuid": "d4721225-6c39-441e-a6da-5a63755fe69a", 00:10:04.440 "is_configured": true, 00:10:04.440 "data_offset": 0, 00:10:04.440 "data_size": 65536 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "name": "BaseBdev3", 00:10:04.440 "uuid": "6f3b8397-9573-40cc-9f93-9b67b900ae95", 00:10:04.440 
"is_configured": true, 00:10:04.440 "data_offset": 0, 00:10:04.440 "data_size": 65536 00:10:04.440 }, 00:10:04.440 { 00:10:04.440 "name": "BaseBdev4", 00:10:04.440 "uuid": "5c095e55-c396-4af1-ac06-f1b97d49995b", 00:10:04.440 "is_configured": true, 00:10:04.440 "data_offset": 0, 00:10:04.440 "data_size": 65536 00:10:04.440 } 00:10:04.440 ] 00:10:04.440 } 00:10:04.440 } 00:10:04.440 }' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.440 BaseBdev2 00:10:04.440 BaseBdev3 00:10:04.440 BaseBdev4' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.440 18:41:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.440 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.700 18:41:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.700 18:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.700 [2024-12-15 18:41:05.003685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.700 [2024-12-15 18:41:05.003753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.700 [2024-12-15 18:41:05.003868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.700 [2024-12-15 18:41:05.004142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.700 [2024-12-15 18:41:05.004198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 85896 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 85896 ']' 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 85896 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85896 00:10:04.700 killing process with pid 85896 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85896' 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 85896 00:10:04.700 [2024-12-15 18:41:05.052402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.700 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 85896 00:10:04.700 [2024-12-15 18:41:05.093955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.960 ************************************ 00:10:04.960 END TEST raid_state_function_test 00:10:04.960 ************************************ 00:10:04.960 00:10:04.960 real 0m9.760s 00:10:04.960 user 0m16.605s 00:10:04.960 sys 0m2.168s 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:04.960 18:41:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:04.960 18:41:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:04.960 18:41:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.960 18:41:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.960 ************************************ 00:10:04.960 START TEST raid_state_function_test_sb 00:10:04.960 ************************************ 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.960 
18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86552 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86552' 00:10:04.960 Process raid pid: 86552 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86552 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86552 ']' 00:10:04.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.960 18:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.220 [2024-12-15 18:41:05.473012] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:05.220 [2024-12-15 18:41:05.473137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.220 [2024-12-15 18:41:05.643532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.480 [2024-12-15 18:41:05.670257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.480 [2024-12-15 18:41:05.712640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.480 [2024-12-15 18:41:05.712681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.050 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.050 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 [2024-12-15 18:41:06.331646] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.051 [2024-12-15 18:41:06.331748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.051 [2024-12-15 18:41:06.331763] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.051 [2024-12-15 18:41:06.331773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.051 [2024-12-15 18:41:06.331779] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:06.051 [2024-12-15 18:41:06.331790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.051 [2024-12-15 18:41:06.331796] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.051 [2024-12-15 18:41:06.331818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.051 18:41:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.051 "name": "Existed_Raid", 00:10:06.051 "uuid": "00576208-a240-4f0c-9259-0734bc89d562", 00:10:06.051 "strip_size_kb": 0, 00:10:06.051 "state": "configuring", 00:10:06.051 "raid_level": "raid1", 00:10:06.051 "superblock": true, 00:10:06.051 "num_base_bdevs": 4, 00:10:06.051 "num_base_bdevs_discovered": 0, 00:10:06.051 "num_base_bdevs_operational": 4, 00:10:06.051 "base_bdevs_list": [ 00:10:06.051 { 00:10:06.051 "name": "BaseBdev1", 00:10:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.051 "is_configured": false, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 0 00:10:06.051 }, 00:10:06.051 { 00:10:06.051 "name": "BaseBdev2", 00:10:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.051 "is_configured": false, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 0 00:10:06.051 }, 00:10:06.051 { 00:10:06.051 "name": "BaseBdev3", 00:10:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.051 "is_configured": false, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 0 00:10:06.051 }, 00:10:06.051 { 00:10:06.051 "name": "BaseBdev4", 00:10:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.051 "is_configured": false, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 0 00:10:06.051 } 00:10:06.051 ] 00:10:06.051 }' 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.051 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.624 18:41:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.624 [2024-12-15 18:41:06.842677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.624 [2024-12-15 18:41:06.842725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.624 [2024-12-15 18:41:06.854655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.624 [2024-12-15 18:41:06.854696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.624 [2024-12-15 18:41:06.854705] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.624 [2024-12-15 18:41:06.854715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.624 [2024-12-15 18:41:06.854721] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.624 [2024-12-15 18:41:06.854729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.624 [2024-12-15 18:41:06.854735] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:10:06.624 [2024-12-15 18:41:06.854744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.624 [2024-12-15 18:41:06.875663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.624 BaseBdev1 00:10:06.624 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 [ 00:10:06.625 { 00:10:06.625 "name": "BaseBdev1", 00:10:06.625 "aliases": [ 00:10:06.625 "aa990e0e-7d15-4d7f-8ff5-f27fced2d167" 00:10:06.625 ], 00:10:06.625 "product_name": "Malloc disk", 00:10:06.625 "block_size": 512, 00:10:06.625 "num_blocks": 65536, 00:10:06.625 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:06.625 "assigned_rate_limits": { 00:10:06.625 "rw_ios_per_sec": 0, 00:10:06.625 "rw_mbytes_per_sec": 0, 00:10:06.625 "r_mbytes_per_sec": 0, 00:10:06.625 "w_mbytes_per_sec": 0 00:10:06.625 }, 00:10:06.625 "claimed": true, 00:10:06.625 "claim_type": "exclusive_write", 00:10:06.625 "zoned": false, 00:10:06.625 "supported_io_types": { 00:10:06.625 "read": true, 00:10:06.625 "write": true, 00:10:06.625 "unmap": true, 00:10:06.625 "flush": true, 00:10:06.625 "reset": true, 00:10:06.625 "nvme_admin": false, 00:10:06.625 "nvme_io": false, 00:10:06.625 "nvme_io_md": false, 00:10:06.625 "write_zeroes": true, 00:10:06.625 "zcopy": true, 00:10:06.625 "get_zone_info": false, 00:10:06.625 "zone_management": false, 00:10:06.625 "zone_append": false, 00:10:06.625 "compare": false, 00:10:06.625 "compare_and_write": false, 00:10:06.625 "abort": true, 00:10:06.625 "seek_hole": false, 00:10:06.625 "seek_data": false, 00:10:06.625 "copy": true, 00:10:06.625 "nvme_iov_md": false 00:10:06.625 }, 00:10:06.625 "memory_domains": [ 00:10:06.625 { 00:10:06.625 "dma_device_id": "system", 00:10:06.625 "dma_device_type": 1 00:10:06.625 }, 00:10:06.625 { 00:10:06.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.625 "dma_device_type": 2 00:10:06.625 } 00:10:06.625 
], 00:10:06.625 "driver_specific": {} 00:10:06.625 } 00:10:06.625 ] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.625 18:41:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.625 "name": "Existed_Raid", 00:10:06.625 "uuid": "412a8066-0331-4f82-b262-810d2f9277ab", 00:10:06.625 "strip_size_kb": 0, 00:10:06.625 "state": "configuring", 00:10:06.625 "raid_level": "raid1", 00:10:06.625 "superblock": true, 00:10:06.625 "num_base_bdevs": 4, 00:10:06.625 "num_base_bdevs_discovered": 1, 00:10:06.625 "num_base_bdevs_operational": 4, 00:10:06.625 "base_bdevs_list": [ 00:10:06.625 { 00:10:06.625 "name": "BaseBdev1", 00:10:06.625 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:06.625 "is_configured": true, 00:10:06.625 "data_offset": 2048, 00:10:06.625 "data_size": 63488 00:10:06.625 }, 00:10:06.625 { 00:10:06.625 "name": "BaseBdev2", 00:10:06.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.625 "is_configured": false, 00:10:06.625 "data_offset": 0, 00:10:06.625 "data_size": 0 00:10:06.625 }, 00:10:06.625 { 00:10:06.625 "name": "BaseBdev3", 00:10:06.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.625 "is_configured": false, 00:10:06.625 "data_offset": 0, 00:10:06.625 "data_size": 0 00:10:06.625 }, 00:10:06.625 { 00:10:06.625 "name": "BaseBdev4", 00:10:06.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.625 "is_configured": false, 00:10:06.625 "data_offset": 0, 00:10:06.625 "data_size": 0 00:10:06.625 } 00:10:06.625 ] 00:10:06.625 }' 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.625 18:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.196 18:41:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 [2024-12-15 18:41:07.398875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.196 [2024-12-15 18:41:07.398932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 [2024-12-15 18:41:07.410881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.196 [2024-12-15 18:41:07.412838] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.196 [2024-12-15 18:41:07.412934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.196 [2024-12-15 18:41:07.412948] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.196 [2024-12-15 18:41:07.412957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.196 [2024-12-15 18:41:07.412964] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.196 [2024-12-15 18:41:07.412973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.196 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:07.196 "name": "Existed_Raid", 00:10:07.196 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:07.196 "strip_size_kb": 0, 00:10:07.196 "state": "configuring", 00:10:07.196 "raid_level": "raid1", 00:10:07.196 "superblock": true, 00:10:07.196 "num_base_bdevs": 4, 00:10:07.196 "num_base_bdevs_discovered": 1, 00:10:07.196 "num_base_bdevs_operational": 4, 00:10:07.196 "base_bdevs_list": [ 00:10:07.196 { 00:10:07.196 "name": "BaseBdev1", 00:10:07.196 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:07.196 "is_configured": true, 00:10:07.196 "data_offset": 2048, 00:10:07.196 "data_size": 63488 00:10:07.196 }, 00:10:07.196 { 00:10:07.196 "name": "BaseBdev2", 00:10:07.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.196 "is_configured": false, 00:10:07.196 "data_offset": 0, 00:10:07.196 "data_size": 0 00:10:07.196 }, 00:10:07.196 { 00:10:07.196 "name": "BaseBdev3", 00:10:07.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.196 "is_configured": false, 00:10:07.196 "data_offset": 0, 00:10:07.197 "data_size": 0 00:10:07.197 }, 00:10:07.197 { 00:10:07.197 "name": "BaseBdev4", 00:10:07.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.197 "is_configured": false, 00:10:07.197 "data_offset": 0, 00:10:07.197 "data_size": 0 00:10:07.197 } 00:10:07.197 ] 00:10:07.197 }' 00:10:07.197 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.197 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 [2024-12-15 18:41:07.853130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:10:07.456 BaseBdev2 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.456 [ 00:10:07.456 { 00:10:07.456 "name": "BaseBdev2", 00:10:07.456 "aliases": [ 00:10:07.456 "609290ff-79c4-4337-9427-24b76959e8f3" 00:10:07.456 ], 00:10:07.456 "product_name": "Malloc disk", 00:10:07.456 "block_size": 512, 00:10:07.456 "num_blocks": 65536, 00:10:07.456 "uuid": "609290ff-79c4-4337-9427-24b76959e8f3", 00:10:07.456 
"assigned_rate_limits": { 00:10:07.456 "rw_ios_per_sec": 0, 00:10:07.456 "rw_mbytes_per_sec": 0, 00:10:07.456 "r_mbytes_per_sec": 0, 00:10:07.456 "w_mbytes_per_sec": 0 00:10:07.456 }, 00:10:07.456 "claimed": true, 00:10:07.456 "claim_type": "exclusive_write", 00:10:07.456 "zoned": false, 00:10:07.456 "supported_io_types": { 00:10:07.456 "read": true, 00:10:07.456 "write": true, 00:10:07.456 "unmap": true, 00:10:07.456 "flush": true, 00:10:07.456 "reset": true, 00:10:07.456 "nvme_admin": false, 00:10:07.456 "nvme_io": false, 00:10:07.456 "nvme_io_md": false, 00:10:07.456 "write_zeroes": true, 00:10:07.456 "zcopy": true, 00:10:07.456 "get_zone_info": false, 00:10:07.456 "zone_management": false, 00:10:07.456 "zone_append": false, 00:10:07.456 "compare": false, 00:10:07.456 "compare_and_write": false, 00:10:07.456 "abort": true, 00:10:07.456 "seek_hole": false, 00:10:07.456 "seek_data": false, 00:10:07.456 "copy": true, 00:10:07.456 "nvme_iov_md": false 00:10:07.456 }, 00:10:07.456 "memory_domains": [ 00:10:07.456 { 00:10:07.456 "dma_device_id": "system", 00:10:07.456 "dma_device_type": 1 00:10:07.456 }, 00:10:07.456 { 00:10:07.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.456 "dma_device_type": 2 00:10:07.456 } 00:10:07.456 ], 00:10:07.456 "driver_specific": {} 00:10:07.456 } 00:10:07.456 ] 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.456 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.714 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.714 "name": "Existed_Raid", 00:10:07.714 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:07.714 "strip_size_kb": 0, 00:10:07.714 "state": "configuring", 00:10:07.714 "raid_level": "raid1", 00:10:07.714 "superblock": true, 00:10:07.714 "num_base_bdevs": 4, 00:10:07.714 "num_base_bdevs_discovered": 2, 00:10:07.715 "num_base_bdevs_operational": 4, 
00:10:07.715 "base_bdevs_list": [ 00:10:07.715 { 00:10:07.715 "name": "BaseBdev1", 00:10:07.715 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:07.715 "is_configured": true, 00:10:07.715 "data_offset": 2048, 00:10:07.715 "data_size": 63488 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "name": "BaseBdev2", 00:10:07.715 "uuid": "609290ff-79c4-4337-9427-24b76959e8f3", 00:10:07.715 "is_configured": true, 00:10:07.715 "data_offset": 2048, 00:10:07.715 "data_size": 63488 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "name": "BaseBdev3", 00:10:07.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.715 "is_configured": false, 00:10:07.715 "data_offset": 0, 00:10:07.715 "data_size": 0 00:10:07.715 }, 00:10:07.715 { 00:10:07.715 "name": "BaseBdev4", 00:10:07.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.715 "is_configured": false, 00:10:07.715 "data_offset": 0, 00:10:07.715 "data_size": 0 00:10:07.715 } 00:10:07.715 ] 00:10:07.715 }' 00:10:07.715 18:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.715 18:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 [2024-12-15 18:41:08.312677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.974 BaseBdev3 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 [ 00:10:07.974 { 00:10:07.974 "name": "BaseBdev3", 00:10:07.974 "aliases": [ 00:10:07.974 "f4176fc9-87d0-43eb-989e-e062c182eeec" 00:10:07.974 ], 00:10:07.974 "product_name": "Malloc disk", 00:10:07.974 "block_size": 512, 00:10:07.974 "num_blocks": 65536, 00:10:07.974 "uuid": "f4176fc9-87d0-43eb-989e-e062c182eeec", 00:10:07.974 "assigned_rate_limits": { 00:10:07.974 "rw_ios_per_sec": 0, 00:10:07.974 "rw_mbytes_per_sec": 0, 00:10:07.974 "r_mbytes_per_sec": 0, 00:10:07.974 "w_mbytes_per_sec": 0 00:10:07.974 }, 00:10:07.974 "claimed": true, 00:10:07.974 "claim_type": "exclusive_write", 00:10:07.974 "zoned": false, 00:10:07.974 "supported_io_types": { 00:10:07.974 "read": true, 00:10:07.974 
"write": true, 00:10:07.974 "unmap": true, 00:10:07.974 "flush": true, 00:10:07.974 "reset": true, 00:10:07.974 "nvme_admin": false, 00:10:07.974 "nvme_io": false, 00:10:07.974 "nvme_io_md": false, 00:10:07.974 "write_zeroes": true, 00:10:07.974 "zcopy": true, 00:10:07.974 "get_zone_info": false, 00:10:07.974 "zone_management": false, 00:10:07.974 "zone_append": false, 00:10:07.974 "compare": false, 00:10:07.974 "compare_and_write": false, 00:10:07.974 "abort": true, 00:10:07.974 "seek_hole": false, 00:10:07.974 "seek_data": false, 00:10:07.974 "copy": true, 00:10:07.974 "nvme_iov_md": false 00:10:07.974 }, 00:10:07.974 "memory_domains": [ 00:10:07.974 { 00:10:07.974 "dma_device_id": "system", 00:10:07.974 "dma_device_type": 1 00:10:07.974 }, 00:10:07.974 { 00:10:07.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.974 "dma_device_type": 2 00:10:07.974 } 00:10:07.974 ], 00:10:07.974 "driver_specific": {} 00:10:07.974 } 00:10:07.974 ] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.974 "name": "Existed_Raid", 00:10:07.974 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:07.974 "strip_size_kb": 0, 00:10:07.974 "state": "configuring", 00:10:07.974 "raid_level": "raid1", 00:10:07.974 "superblock": true, 00:10:07.974 "num_base_bdevs": 4, 00:10:07.974 "num_base_bdevs_discovered": 3, 00:10:07.974 "num_base_bdevs_operational": 4, 00:10:07.974 "base_bdevs_list": [ 00:10:07.974 { 00:10:07.974 "name": "BaseBdev1", 00:10:07.974 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:07.974 "is_configured": true, 00:10:07.974 "data_offset": 2048, 00:10:07.974 "data_size": 63488 00:10:07.974 }, 00:10:07.974 { 00:10:07.974 "name": "BaseBdev2", 00:10:07.974 "uuid": 
"609290ff-79c4-4337-9427-24b76959e8f3", 00:10:07.974 "is_configured": true, 00:10:07.974 "data_offset": 2048, 00:10:07.974 "data_size": 63488 00:10:07.974 }, 00:10:07.974 { 00:10:07.974 "name": "BaseBdev3", 00:10:07.974 "uuid": "f4176fc9-87d0-43eb-989e-e062c182eeec", 00:10:07.974 "is_configured": true, 00:10:07.974 "data_offset": 2048, 00:10:07.974 "data_size": 63488 00:10:07.974 }, 00:10:07.974 { 00:10:07.974 "name": "BaseBdev4", 00:10:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.974 "is_configured": false, 00:10:07.974 "data_offset": 0, 00:10:07.974 "data_size": 0 00:10:07.974 } 00:10:07.974 ] 00:10:07.974 }' 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.974 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 [2024-12-15 18:41:08.783378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.544 [2024-12-15 18:41:08.783685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:08.544 [2024-12-15 18:41:08.783742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.544 [2024-12-15 18:41:08.784042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:08.544 [2024-12-15 18:41:08.784232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:08.544 [2024-12-15 18:41:08.784280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:10:08.544 [2024-12-15 18:41:08.784441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.544 BaseBdev4 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 [ 00:10:08.544 { 00:10:08.544 "name": "BaseBdev4", 00:10:08.544 "aliases": [ 00:10:08.544 "043c1596-e173-4692-8e8b-d229c7196d58" 00:10:08.544 ], 00:10:08.544 "product_name": "Malloc disk", 00:10:08.544 "block_size": 512, 00:10:08.544 
"num_blocks": 65536, 00:10:08.544 "uuid": "043c1596-e173-4692-8e8b-d229c7196d58", 00:10:08.544 "assigned_rate_limits": { 00:10:08.544 "rw_ios_per_sec": 0, 00:10:08.544 "rw_mbytes_per_sec": 0, 00:10:08.544 "r_mbytes_per_sec": 0, 00:10:08.544 "w_mbytes_per_sec": 0 00:10:08.544 }, 00:10:08.544 "claimed": true, 00:10:08.544 "claim_type": "exclusive_write", 00:10:08.544 "zoned": false, 00:10:08.544 "supported_io_types": { 00:10:08.544 "read": true, 00:10:08.544 "write": true, 00:10:08.544 "unmap": true, 00:10:08.544 "flush": true, 00:10:08.544 "reset": true, 00:10:08.544 "nvme_admin": false, 00:10:08.544 "nvme_io": false, 00:10:08.544 "nvme_io_md": false, 00:10:08.544 "write_zeroes": true, 00:10:08.544 "zcopy": true, 00:10:08.544 "get_zone_info": false, 00:10:08.544 "zone_management": false, 00:10:08.544 "zone_append": false, 00:10:08.544 "compare": false, 00:10:08.544 "compare_and_write": false, 00:10:08.544 "abort": true, 00:10:08.544 "seek_hole": false, 00:10:08.544 "seek_data": false, 00:10:08.544 "copy": true, 00:10:08.544 "nvme_iov_md": false 00:10:08.544 }, 00:10:08.544 "memory_domains": [ 00:10:08.544 { 00:10:08.544 "dma_device_id": "system", 00:10:08.544 "dma_device_type": 1 00:10:08.544 }, 00:10:08.544 { 00:10:08.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.544 "dma_device_type": 2 00:10:08.544 } 00:10:08.544 ], 00:10:08.544 "driver_specific": {} 00:10:08.544 } 00:10:08.544 ] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.545 "name": "Existed_Raid", 00:10:08.545 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:08.545 "strip_size_kb": 0, 00:10:08.545 "state": "online", 00:10:08.545 "raid_level": "raid1", 00:10:08.545 "superblock": true, 00:10:08.545 "num_base_bdevs": 4, 
00:10:08.545 "num_base_bdevs_discovered": 4, 00:10:08.545 "num_base_bdevs_operational": 4, 00:10:08.545 "base_bdevs_list": [ 00:10:08.545 { 00:10:08.545 "name": "BaseBdev1", 00:10:08.545 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:08.545 "is_configured": true, 00:10:08.545 "data_offset": 2048, 00:10:08.545 "data_size": 63488 00:10:08.545 }, 00:10:08.545 { 00:10:08.545 "name": "BaseBdev2", 00:10:08.545 "uuid": "609290ff-79c4-4337-9427-24b76959e8f3", 00:10:08.545 "is_configured": true, 00:10:08.545 "data_offset": 2048, 00:10:08.545 "data_size": 63488 00:10:08.545 }, 00:10:08.545 { 00:10:08.545 "name": "BaseBdev3", 00:10:08.545 "uuid": "f4176fc9-87d0-43eb-989e-e062c182eeec", 00:10:08.545 "is_configured": true, 00:10:08.545 "data_offset": 2048, 00:10:08.545 "data_size": 63488 00:10:08.545 }, 00:10:08.545 { 00:10:08.545 "name": "BaseBdev4", 00:10:08.545 "uuid": "043c1596-e173-4692-8e8b-d229c7196d58", 00:10:08.545 "is_configured": true, 00:10:08.545 "data_offset": 2048, 00:10:08.545 "data_size": 63488 00:10:08.545 } 00:10:08.545 ] 00:10:08.545 }' 00:10:08.545 18:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.545 18:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.114 
18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.114 [2024-12-15 18:41:09.290927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.114 "name": "Existed_Raid", 00:10:09.114 "aliases": [ 00:10:09.114 "d4ccabfa-bbeb-4118-9a44-61c1fc65c921" 00:10:09.114 ], 00:10:09.114 "product_name": "Raid Volume", 00:10:09.114 "block_size": 512, 00:10:09.114 "num_blocks": 63488, 00:10:09.114 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:09.114 "assigned_rate_limits": { 00:10:09.114 "rw_ios_per_sec": 0, 00:10:09.114 "rw_mbytes_per_sec": 0, 00:10:09.114 "r_mbytes_per_sec": 0, 00:10:09.114 "w_mbytes_per_sec": 0 00:10:09.114 }, 00:10:09.114 "claimed": false, 00:10:09.114 "zoned": false, 00:10:09.114 "supported_io_types": { 00:10:09.114 "read": true, 00:10:09.114 "write": true, 00:10:09.114 "unmap": false, 00:10:09.114 "flush": false, 00:10:09.114 "reset": true, 00:10:09.114 "nvme_admin": false, 00:10:09.114 "nvme_io": false, 00:10:09.114 "nvme_io_md": false, 00:10:09.114 "write_zeroes": true, 00:10:09.114 "zcopy": false, 00:10:09.114 "get_zone_info": false, 00:10:09.114 "zone_management": false, 00:10:09.114 "zone_append": false, 00:10:09.114 "compare": false, 00:10:09.114 "compare_and_write": false, 00:10:09.114 "abort": false, 00:10:09.114 "seek_hole": false, 00:10:09.114 "seek_data": false, 00:10:09.114 "copy": false, 00:10:09.114 
"nvme_iov_md": false 00:10:09.114 }, 00:10:09.114 "memory_domains": [ 00:10:09.114 { 00:10:09.114 "dma_device_id": "system", 00:10:09.114 "dma_device_type": 1 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.114 "dma_device_type": 2 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "system", 00:10:09.114 "dma_device_type": 1 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.114 "dma_device_type": 2 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "system", 00:10:09.114 "dma_device_type": 1 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.114 "dma_device_type": 2 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "system", 00:10:09.114 "dma_device_type": 1 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.114 "dma_device_type": 2 00:10:09.114 } 00:10:09.114 ], 00:10:09.114 "driver_specific": { 00:10:09.114 "raid": { 00:10:09.114 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:09.114 "strip_size_kb": 0, 00:10:09.114 "state": "online", 00:10:09.114 "raid_level": "raid1", 00:10:09.114 "superblock": true, 00:10:09.114 "num_base_bdevs": 4, 00:10:09.114 "num_base_bdevs_discovered": 4, 00:10:09.114 "num_base_bdevs_operational": 4, 00:10:09.114 "base_bdevs_list": [ 00:10:09.114 { 00:10:09.114 "name": "BaseBdev1", 00:10:09.114 "uuid": "aa990e0e-7d15-4d7f-8ff5-f27fced2d167", 00:10:09.114 "is_configured": true, 00:10:09.114 "data_offset": 2048, 00:10:09.114 "data_size": 63488 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "name": "BaseBdev2", 00:10:09.114 "uuid": "609290ff-79c4-4337-9427-24b76959e8f3", 00:10:09.114 "is_configured": true, 00:10:09.114 "data_offset": 2048, 00:10:09.114 "data_size": 63488 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "name": "BaseBdev3", 00:10:09.114 "uuid": "f4176fc9-87d0-43eb-989e-e062c182eeec", 00:10:09.114 "is_configured": true, 
00:10:09.114 "data_offset": 2048, 00:10:09.114 "data_size": 63488 00:10:09.114 }, 00:10:09.114 { 00:10:09.114 "name": "BaseBdev4", 00:10:09.114 "uuid": "043c1596-e173-4692-8e8b-d229c7196d58", 00:10:09.114 "is_configured": true, 00:10:09.114 "data_offset": 2048, 00:10:09.114 "data_size": 63488 00:10:09.114 } 00:10:09.114 ] 00:10:09.114 } 00:10:09.114 } 00:10:09.114 }' 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.114 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:09.114 BaseBdev2 00:10:09.114 BaseBdev3 00:10:09.114 BaseBdev4' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.115 18:41:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.115 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.374 [2024-12-15 18:41:09.618039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:09.374 18:41:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.374 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.374 "name": "Existed_Raid", 00:10:09.374 "uuid": "d4ccabfa-bbeb-4118-9a44-61c1fc65c921", 00:10:09.374 "strip_size_kb": 0, 00:10:09.374 
"state": "online", 00:10:09.374 "raid_level": "raid1", 00:10:09.374 "superblock": true, 00:10:09.374 "num_base_bdevs": 4, 00:10:09.374 "num_base_bdevs_discovered": 3, 00:10:09.374 "num_base_bdevs_operational": 3, 00:10:09.374 "base_bdevs_list": [ 00:10:09.374 { 00:10:09.374 "name": null, 00:10:09.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.375 "is_configured": false, 00:10:09.375 "data_offset": 0, 00:10:09.375 "data_size": 63488 00:10:09.375 }, 00:10:09.375 { 00:10:09.375 "name": "BaseBdev2", 00:10:09.375 "uuid": "609290ff-79c4-4337-9427-24b76959e8f3", 00:10:09.375 "is_configured": true, 00:10:09.375 "data_offset": 2048, 00:10:09.375 "data_size": 63488 00:10:09.375 }, 00:10:09.375 { 00:10:09.375 "name": "BaseBdev3", 00:10:09.375 "uuid": "f4176fc9-87d0-43eb-989e-e062c182eeec", 00:10:09.375 "is_configured": true, 00:10:09.375 "data_offset": 2048, 00:10:09.375 "data_size": 63488 00:10:09.375 }, 00:10:09.375 { 00:10:09.375 "name": "BaseBdev4", 00:10:09.375 "uuid": "043c1596-e173-4692-8e8b-d229c7196d58", 00:10:09.375 "is_configured": true, 00:10:09.375 "data_offset": 2048, 00:10:09.375 "data_size": 63488 00:10:09.375 } 00:10:09.375 ] 00:10:09.375 }' 00:10:09.375 18:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.375 18:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.634 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 [2024-12-15 18:41:10.112595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 [2024-12-15 18:41:10.183853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 [2024-12-15 18:41:10.255053] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:09.895 [2024-12-15 18:41:10.255216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.895 [2024-12-15 18:41:10.266995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.895 [2024-12-15 18:41:10.267126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.895 [2024-12-15 18:41:10.267144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.156 BaseBdev2 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:10.156 [ 00:10:10.156 { 00:10:10.156 "name": "BaseBdev2", 00:10:10.156 "aliases": [ 00:10:10.156 "3270d003-cc6a-4405-93c7-1ae65fb3eb33" 00:10:10.156 ], 00:10:10.156 "product_name": "Malloc disk", 00:10:10.156 "block_size": 512, 00:10:10.156 "num_blocks": 65536, 00:10:10.156 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:10.156 "assigned_rate_limits": { 00:10:10.156 "rw_ios_per_sec": 0, 00:10:10.156 "rw_mbytes_per_sec": 0, 00:10:10.156 "r_mbytes_per_sec": 0, 00:10:10.156 "w_mbytes_per_sec": 0 00:10:10.156 }, 00:10:10.156 "claimed": false, 00:10:10.156 "zoned": false, 00:10:10.156 "supported_io_types": { 00:10:10.156 "read": true, 00:10:10.156 "write": true, 00:10:10.156 "unmap": true, 00:10:10.156 "flush": true, 00:10:10.156 "reset": true, 00:10:10.156 "nvme_admin": false, 00:10:10.156 "nvme_io": false, 00:10:10.156 "nvme_io_md": false, 00:10:10.156 "write_zeroes": true, 00:10:10.156 "zcopy": true, 00:10:10.156 "get_zone_info": false, 00:10:10.156 "zone_management": false, 00:10:10.156 "zone_append": false, 00:10:10.156 "compare": false, 00:10:10.156 "compare_and_write": false, 00:10:10.156 "abort": true, 00:10:10.156 "seek_hole": false, 00:10:10.156 "seek_data": false, 00:10:10.156 "copy": true, 00:10:10.156 "nvme_iov_md": false 00:10:10.156 }, 00:10:10.156 "memory_domains": [ 00:10:10.156 { 00:10:10.156 "dma_device_id": "system", 00:10:10.156 "dma_device_type": 1 00:10:10.156 }, 00:10:10.156 { 00:10:10.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.156 "dma_device_type": 2 00:10:10.156 } 00:10:10.156 ], 00:10:10.156 "driver_specific": {} 00:10:10.156 } 00:10:10.156 ] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.156 18:41:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.156 BaseBdev3 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.156 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 [ 00:10:10.157 { 00:10:10.157 "name": "BaseBdev3", 00:10:10.157 "aliases": [ 00:10:10.157 "a1093eac-a292-445f-af0e-c167c43ff385" 00:10:10.157 ], 00:10:10.157 "product_name": "Malloc disk", 00:10:10.157 "block_size": 512, 00:10:10.157 "num_blocks": 65536, 00:10:10.157 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:10.157 "assigned_rate_limits": { 00:10:10.157 "rw_ios_per_sec": 0, 00:10:10.157 "rw_mbytes_per_sec": 0, 00:10:10.157 "r_mbytes_per_sec": 0, 00:10:10.157 "w_mbytes_per_sec": 0 00:10:10.157 }, 00:10:10.157 "claimed": false, 00:10:10.157 "zoned": false, 00:10:10.157 "supported_io_types": { 00:10:10.157 "read": true, 00:10:10.157 "write": true, 00:10:10.157 "unmap": true, 00:10:10.157 "flush": true, 00:10:10.157 "reset": true, 00:10:10.157 "nvme_admin": false, 00:10:10.157 "nvme_io": false, 00:10:10.157 "nvme_io_md": false, 00:10:10.157 "write_zeroes": true, 00:10:10.157 "zcopy": true, 00:10:10.157 "get_zone_info": false, 00:10:10.157 "zone_management": false, 00:10:10.157 "zone_append": false, 00:10:10.157 "compare": false, 00:10:10.157 "compare_and_write": false, 00:10:10.157 "abort": true, 00:10:10.157 "seek_hole": false, 00:10:10.157 "seek_data": false, 00:10:10.157 "copy": true, 00:10:10.157 "nvme_iov_md": false 00:10:10.157 }, 00:10:10.157 "memory_domains": [ 00:10:10.157 { 00:10:10.157 "dma_device_id": "system", 00:10:10.157 "dma_device_type": 1 00:10:10.157 }, 00:10:10.157 { 00:10:10.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.157 "dma_device_type": 2 00:10:10.157 } 00:10:10.157 ], 00:10:10.157 "driver_specific": {} 00:10:10.157 } 00:10:10.157 ] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 BaseBdev4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 [ 00:10:10.157 { 00:10:10.157 "name": "BaseBdev4", 00:10:10.157 "aliases": [ 00:10:10.157 "fd916615-dffc-4881-a636-731f5001c640" 00:10:10.157 ], 00:10:10.157 "product_name": "Malloc disk", 00:10:10.157 "block_size": 512, 00:10:10.157 "num_blocks": 65536, 00:10:10.157 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:10.157 "assigned_rate_limits": { 00:10:10.157 "rw_ios_per_sec": 0, 00:10:10.157 "rw_mbytes_per_sec": 0, 00:10:10.157 "r_mbytes_per_sec": 0, 00:10:10.157 "w_mbytes_per_sec": 0 00:10:10.157 }, 00:10:10.157 "claimed": false, 00:10:10.157 "zoned": false, 00:10:10.157 "supported_io_types": { 00:10:10.157 "read": true, 00:10:10.157 "write": true, 00:10:10.157 "unmap": true, 00:10:10.157 "flush": true, 00:10:10.157 "reset": true, 00:10:10.157 "nvme_admin": false, 00:10:10.157 "nvme_io": false, 00:10:10.157 "nvme_io_md": false, 00:10:10.157 "write_zeroes": true, 00:10:10.157 "zcopy": true, 00:10:10.157 "get_zone_info": false, 00:10:10.157 "zone_management": false, 00:10:10.157 "zone_append": false, 00:10:10.157 "compare": false, 00:10:10.157 "compare_and_write": false, 00:10:10.157 "abort": true, 00:10:10.157 "seek_hole": false, 00:10:10.157 "seek_data": false, 00:10:10.157 "copy": true, 00:10:10.157 "nvme_iov_md": false 00:10:10.157 }, 00:10:10.157 "memory_domains": [ 00:10:10.157 { 00:10:10.157 "dma_device_id": "system", 00:10:10.157 "dma_device_type": 1 00:10:10.157 }, 00:10:10.157 { 00:10:10.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.157 "dma_device_type": 2 00:10:10.157 } 00:10:10.157 ], 00:10:10.157 "driver_specific": {} 00:10:10.157 } 00:10:10.157 ] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 [2024-12-15 18:41:10.483523] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.157 [2024-12-15 18:41:10.483609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.157 [2024-12-15 18:41:10.483647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.157 [2024-12-15 18:41:10.485543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.157 [2024-12-15 18:41:10.485629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.157 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.157 "name": "Existed_Raid", 00:10:10.158 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:10.158 "strip_size_kb": 0, 00:10:10.158 "state": "configuring", 00:10:10.158 "raid_level": "raid1", 00:10:10.158 "superblock": true, 00:10:10.158 "num_base_bdevs": 4, 00:10:10.158 "num_base_bdevs_discovered": 3, 00:10:10.158 "num_base_bdevs_operational": 4, 00:10:10.158 "base_bdevs_list": [ 00:10:10.158 { 00:10:10.158 "name": "BaseBdev1", 00:10:10.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.158 "is_configured": false, 00:10:10.158 "data_offset": 0, 00:10:10.158 "data_size": 0 00:10:10.158 }, 00:10:10.158 { 00:10:10.158 "name": "BaseBdev2", 00:10:10.158 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 
00:10:10.158 "is_configured": true, 00:10:10.158 "data_offset": 2048, 00:10:10.158 "data_size": 63488 00:10:10.158 }, 00:10:10.158 { 00:10:10.158 "name": "BaseBdev3", 00:10:10.158 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:10.158 "is_configured": true, 00:10:10.158 "data_offset": 2048, 00:10:10.158 "data_size": 63488 00:10:10.158 }, 00:10:10.158 { 00:10:10.158 "name": "BaseBdev4", 00:10:10.158 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:10.158 "is_configured": true, 00:10:10.158 "data_offset": 2048, 00:10:10.158 "data_size": 63488 00:10:10.158 } 00:10:10.158 ] 00:10:10.158 }' 00:10:10.158 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.158 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.727 [2024-12-15 18:41:10.946818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.727 18:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.727 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.727 "name": "Existed_Raid", 00:10:10.727 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:10.727 "strip_size_kb": 0, 00:10:10.727 "state": "configuring", 00:10:10.727 "raid_level": "raid1", 00:10:10.727 "superblock": true, 00:10:10.727 "num_base_bdevs": 4, 00:10:10.727 "num_base_bdevs_discovered": 2, 00:10:10.727 "num_base_bdevs_operational": 4, 00:10:10.727 "base_bdevs_list": [ 00:10:10.727 { 00:10:10.727 "name": "BaseBdev1", 00:10:10.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.727 "is_configured": false, 00:10:10.727 "data_offset": 0, 00:10:10.727 "data_size": 0 00:10:10.727 }, 00:10:10.727 { 00:10:10.727 "name": null, 00:10:10.727 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:10.727 
"is_configured": false, 00:10:10.727 "data_offset": 0, 00:10:10.727 "data_size": 63488 00:10:10.727 }, 00:10:10.727 { 00:10:10.727 "name": "BaseBdev3", 00:10:10.727 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:10.727 "is_configured": true, 00:10:10.727 "data_offset": 2048, 00:10:10.727 "data_size": 63488 00:10:10.727 }, 00:10:10.727 { 00:10:10.727 "name": "BaseBdev4", 00:10:10.727 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:10.727 "is_configured": true, 00:10:10.727 "data_offset": 2048, 00:10:10.727 "data_size": 63488 00:10:10.727 } 00:10:10.727 ] 00:10:10.727 }' 00:10:10.727 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.727 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.987 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.987 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.987 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.987 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.987 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 [2024-12-15 18:41:11.444946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.248 BaseBdev1 
00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 [ 00:10:11.248 { 00:10:11.248 "name": "BaseBdev1", 00:10:11.248 "aliases": [ 00:10:11.248 "9c8108aa-68b5-4bc4-885a-f9e687b1608a" 00:10:11.248 ], 00:10:11.248 "product_name": "Malloc disk", 00:10:11.248 "block_size": 512, 00:10:11.248 "num_blocks": 65536, 00:10:11.248 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:11.248 "assigned_rate_limits": { 00:10:11.248 
"rw_ios_per_sec": 0, 00:10:11.248 "rw_mbytes_per_sec": 0, 00:10:11.248 "r_mbytes_per_sec": 0, 00:10:11.248 "w_mbytes_per_sec": 0 00:10:11.248 }, 00:10:11.248 "claimed": true, 00:10:11.248 "claim_type": "exclusive_write", 00:10:11.248 "zoned": false, 00:10:11.248 "supported_io_types": { 00:10:11.248 "read": true, 00:10:11.248 "write": true, 00:10:11.248 "unmap": true, 00:10:11.248 "flush": true, 00:10:11.248 "reset": true, 00:10:11.248 "nvme_admin": false, 00:10:11.248 "nvme_io": false, 00:10:11.248 "nvme_io_md": false, 00:10:11.248 "write_zeroes": true, 00:10:11.248 "zcopy": true, 00:10:11.248 "get_zone_info": false, 00:10:11.248 "zone_management": false, 00:10:11.248 "zone_append": false, 00:10:11.248 "compare": false, 00:10:11.248 "compare_and_write": false, 00:10:11.248 "abort": true, 00:10:11.248 "seek_hole": false, 00:10:11.248 "seek_data": false, 00:10:11.248 "copy": true, 00:10:11.248 "nvme_iov_md": false 00:10:11.248 }, 00:10:11.248 "memory_domains": [ 00:10:11.248 { 00:10:11.248 "dma_device_id": "system", 00:10:11.248 "dma_device_type": 1 00:10:11.248 }, 00:10:11.248 { 00:10:11.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.248 "dma_device_type": 2 00:10:11.248 } 00:10:11.248 ], 00:10:11.248 "driver_specific": {} 00:10:11.248 } 00:10:11.248 ] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.248 "name": "Existed_Raid", 00:10:11.248 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:11.248 "strip_size_kb": 0, 00:10:11.248 "state": "configuring", 00:10:11.248 "raid_level": "raid1", 00:10:11.248 "superblock": true, 00:10:11.248 "num_base_bdevs": 4, 00:10:11.248 "num_base_bdevs_discovered": 3, 00:10:11.248 "num_base_bdevs_operational": 4, 00:10:11.248 "base_bdevs_list": [ 00:10:11.248 { 00:10:11.248 "name": "BaseBdev1", 00:10:11.248 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:11.248 "is_configured": true, 00:10:11.248 "data_offset": 2048, 00:10:11.248 "data_size": 63488 
00:10:11.248 }, 00:10:11.248 { 00:10:11.248 "name": null, 00:10:11.248 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:11.248 "is_configured": false, 00:10:11.248 "data_offset": 0, 00:10:11.248 "data_size": 63488 00:10:11.248 }, 00:10:11.248 { 00:10:11.248 "name": "BaseBdev3", 00:10:11.248 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:11.248 "is_configured": true, 00:10:11.248 "data_offset": 2048, 00:10:11.248 "data_size": 63488 00:10:11.248 }, 00:10:11.248 { 00:10:11.248 "name": "BaseBdev4", 00:10:11.248 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:11.248 "is_configured": true, 00:10:11.248 "data_offset": 2048, 00:10:11.248 "data_size": 63488 00:10:11.248 } 00:10:11.248 ] 00:10:11.248 }' 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.248 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.508 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.508 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.508 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.508 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.508 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.769 
[2024-12-15 18:41:11.964120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.769 18:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.769 18:41:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.769 "name": "Existed_Raid", 00:10:11.769 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:11.769 "strip_size_kb": 0, 00:10:11.769 "state": "configuring", 00:10:11.769 "raid_level": "raid1", 00:10:11.769 "superblock": true, 00:10:11.769 "num_base_bdevs": 4, 00:10:11.769 "num_base_bdevs_discovered": 2, 00:10:11.769 "num_base_bdevs_operational": 4, 00:10:11.769 "base_bdevs_list": [ 00:10:11.769 { 00:10:11.769 "name": "BaseBdev1", 00:10:11.769 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:11.769 "is_configured": true, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": null, 00:10:11.769 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:11.769 "is_configured": false, 00:10:11.769 "data_offset": 0, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": null, 00:10:11.769 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:11.769 "is_configured": false, 00:10:11.769 "data_offset": 0, 00:10:11.769 "data_size": 63488 00:10:11.769 }, 00:10:11.769 { 00:10:11.769 "name": "BaseBdev4", 00:10:11.769 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:11.769 "is_configured": true, 00:10:11.769 "data_offset": 2048, 00:10:11.769 "data_size": 63488 00:10:11.769 } 00:10:11.769 ] 00:10:11.769 }' 00:10:11.769 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.769 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.029 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.029 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.029 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.029 
18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.290 [2024-12-15 18:41:12.511263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:12.290 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.291 "name": "Existed_Raid", 00:10:12.291 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:12.291 "strip_size_kb": 0, 00:10:12.291 "state": "configuring", 00:10:12.291 "raid_level": "raid1", 00:10:12.291 "superblock": true, 00:10:12.291 "num_base_bdevs": 4, 00:10:12.291 "num_base_bdevs_discovered": 3, 00:10:12.291 "num_base_bdevs_operational": 4, 00:10:12.291 "base_bdevs_list": [ 00:10:12.291 { 00:10:12.291 "name": "BaseBdev1", 00:10:12.291 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:12.291 "is_configured": true, 00:10:12.291 "data_offset": 2048, 00:10:12.291 "data_size": 63488 00:10:12.291 }, 00:10:12.291 { 00:10:12.291 "name": null, 00:10:12.291 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:12.291 "is_configured": false, 00:10:12.291 "data_offset": 0, 00:10:12.291 "data_size": 63488 00:10:12.291 }, 00:10:12.291 { 00:10:12.291 "name": "BaseBdev3", 00:10:12.291 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:12.291 "is_configured": true, 00:10:12.291 "data_offset": 2048, 00:10:12.291 "data_size": 63488 00:10:12.291 }, 00:10:12.291 { 00:10:12.291 "name": "BaseBdev4", 00:10:12.291 "uuid": 
"fd916615-dffc-4881-a636-731f5001c640", 00:10:12.291 "is_configured": true, 00:10:12.291 "data_offset": 2048, 00:10:12.291 "data_size": 63488 00:10:12.291 } 00:10:12.291 ] 00:10:12.291 }' 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.291 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.551 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.551 18:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.551 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.551 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.551 18:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 [2024-12-15 18:41:13.010505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.812 "name": "Existed_Raid", 00:10:12.812 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:12.812 "strip_size_kb": 0, 00:10:12.812 "state": "configuring", 00:10:12.812 "raid_level": "raid1", 00:10:12.812 "superblock": true, 00:10:12.812 "num_base_bdevs": 4, 00:10:12.812 "num_base_bdevs_discovered": 2, 00:10:12.812 "num_base_bdevs_operational": 4, 00:10:12.812 "base_bdevs_list": [ 00:10:12.812 { 00:10:12.812 "name": null, 00:10:12.812 
"uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:12.812 "is_configured": false, 00:10:12.812 "data_offset": 0, 00:10:12.812 "data_size": 63488 00:10:12.812 }, 00:10:12.812 { 00:10:12.812 "name": null, 00:10:12.812 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:12.812 "is_configured": false, 00:10:12.812 "data_offset": 0, 00:10:12.812 "data_size": 63488 00:10:12.812 }, 00:10:12.812 { 00:10:12.812 "name": "BaseBdev3", 00:10:12.812 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:12.812 "is_configured": true, 00:10:12.812 "data_offset": 2048, 00:10:12.812 "data_size": 63488 00:10:12.812 }, 00:10:12.812 { 00:10:12.812 "name": "BaseBdev4", 00:10:12.812 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:12.812 "is_configured": true, 00:10:12.812 "data_offset": 2048, 00:10:12.812 "data_size": 63488 00:10:12.812 } 00:10:12.812 ] 00:10:12.812 }' 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.812 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.072 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.072 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.072 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.072 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.073 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.333 [2024-12-15 18:41:13.536590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.333 18:41:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.333 "name": "Existed_Raid", 00:10:13.333 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:13.333 "strip_size_kb": 0, 00:10:13.333 "state": "configuring", 00:10:13.333 "raid_level": "raid1", 00:10:13.333 "superblock": true, 00:10:13.333 "num_base_bdevs": 4, 00:10:13.333 "num_base_bdevs_discovered": 3, 00:10:13.333 "num_base_bdevs_operational": 4, 00:10:13.333 "base_bdevs_list": [ 00:10:13.333 { 00:10:13.333 "name": null, 00:10:13.333 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:13.333 "is_configured": false, 00:10:13.333 "data_offset": 0, 00:10:13.333 "data_size": 63488 00:10:13.333 }, 00:10:13.333 { 00:10:13.333 "name": "BaseBdev2", 00:10:13.333 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:13.333 "is_configured": true, 00:10:13.333 "data_offset": 2048, 00:10:13.333 "data_size": 63488 00:10:13.333 }, 00:10:13.333 { 00:10:13.333 "name": "BaseBdev3", 00:10:13.333 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:13.333 "is_configured": true, 00:10:13.333 "data_offset": 2048, 00:10:13.333 "data_size": 63488 00:10:13.333 }, 00:10:13.333 { 00:10:13.333 "name": "BaseBdev4", 00:10:13.333 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:13.333 "is_configured": true, 00:10:13.333 "data_offset": 2048, 00:10:13.333 "data_size": 63488 00:10:13.333 } 00:10:13.333 ] 00:10:13.333 }' 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.333 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.594 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.594 18:41:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.594 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.594 18:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.594 18:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.594 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9c8108aa-68b5-4bc4-885a-f9e687b1608a 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.855 [2024-12-15 18:41:14.090613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.855 NewBaseBdev 00:10:13.855 [2024-12-15 18:41:14.090914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:13.855 [2024-12-15 18:41:14.090935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.855 [2024-12-15 18:41:14.091199] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.855 [2024-12-15 18:41:14.091319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:13.855 [2024-12-15 18:41:14.091329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:13.855 [2024-12-15 18:41:14.091422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.855 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.855 
18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.855 [ 00:10:13.855 { 00:10:13.855 "name": "NewBaseBdev", 00:10:13.855 "aliases": [ 00:10:13.855 "9c8108aa-68b5-4bc4-885a-f9e687b1608a" 00:10:13.855 ], 00:10:13.855 "product_name": "Malloc disk", 00:10:13.855 "block_size": 512, 00:10:13.855 "num_blocks": 65536, 00:10:13.855 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:13.855 "assigned_rate_limits": { 00:10:13.855 "rw_ios_per_sec": 0, 00:10:13.855 "rw_mbytes_per_sec": 0, 00:10:13.855 "r_mbytes_per_sec": 0, 00:10:13.855 "w_mbytes_per_sec": 0 00:10:13.855 }, 00:10:13.855 "claimed": true, 00:10:13.855 "claim_type": "exclusive_write", 00:10:13.855 "zoned": false, 00:10:13.855 "supported_io_types": { 00:10:13.855 "read": true, 00:10:13.855 "write": true, 00:10:13.855 "unmap": true, 00:10:13.855 "flush": true, 00:10:13.855 "reset": true, 00:10:13.855 "nvme_admin": false, 00:10:13.855 "nvme_io": false, 00:10:13.855 "nvme_io_md": false, 00:10:13.855 "write_zeroes": true, 00:10:13.855 "zcopy": true, 00:10:13.856 "get_zone_info": false, 00:10:13.856 "zone_management": false, 00:10:13.856 "zone_append": false, 00:10:13.856 "compare": false, 00:10:13.856 "compare_and_write": false, 00:10:13.856 "abort": true, 00:10:13.856 "seek_hole": false, 00:10:13.856 "seek_data": false, 00:10:13.856 "copy": true, 00:10:13.856 "nvme_iov_md": false 00:10:13.856 }, 00:10:13.856 "memory_domains": [ 00:10:13.856 { 00:10:13.856 "dma_device_id": "system", 00:10:13.856 "dma_device_type": 1 00:10:13.856 }, 00:10:13.856 { 00:10:13.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.856 "dma_device_type": 2 00:10:13.856 } 00:10:13.856 ], 00:10:13.856 "driver_specific": {} 00:10:13.856 } 00:10:13.856 ] 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.856 18:41:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.856 "name": "Existed_Raid", 00:10:13.856 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:13.856 "strip_size_kb": 0, 00:10:13.856 
"state": "online", 00:10:13.856 "raid_level": "raid1", 00:10:13.856 "superblock": true, 00:10:13.856 "num_base_bdevs": 4, 00:10:13.856 "num_base_bdevs_discovered": 4, 00:10:13.856 "num_base_bdevs_operational": 4, 00:10:13.856 "base_bdevs_list": [ 00:10:13.856 { 00:10:13.856 "name": "NewBaseBdev", 00:10:13.856 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:13.856 "is_configured": true, 00:10:13.856 "data_offset": 2048, 00:10:13.856 "data_size": 63488 00:10:13.856 }, 00:10:13.856 { 00:10:13.856 "name": "BaseBdev2", 00:10:13.856 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:13.856 "is_configured": true, 00:10:13.856 "data_offset": 2048, 00:10:13.856 "data_size": 63488 00:10:13.856 }, 00:10:13.856 { 00:10:13.856 "name": "BaseBdev3", 00:10:13.856 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:13.856 "is_configured": true, 00:10:13.856 "data_offset": 2048, 00:10:13.856 "data_size": 63488 00:10:13.856 }, 00:10:13.856 { 00:10:13.856 "name": "BaseBdev4", 00:10:13.856 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:13.856 "is_configured": true, 00:10:13.856 "data_offset": 2048, 00:10:13.856 "data_size": 63488 00:10:13.856 } 00:10:13.856 ] 00:10:13.856 }' 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.856 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.427 
18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.427 [2024-12-15 18:41:14.578174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.427 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.427 "name": "Existed_Raid", 00:10:14.427 "aliases": [ 00:10:14.427 "bb4bb2c9-908e-4622-a595-77bac7db1f43" 00:10:14.427 ], 00:10:14.427 "product_name": "Raid Volume", 00:10:14.427 "block_size": 512, 00:10:14.427 "num_blocks": 63488, 00:10:14.427 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:14.427 "assigned_rate_limits": { 00:10:14.427 "rw_ios_per_sec": 0, 00:10:14.427 "rw_mbytes_per_sec": 0, 00:10:14.428 "r_mbytes_per_sec": 0, 00:10:14.428 "w_mbytes_per_sec": 0 00:10:14.428 }, 00:10:14.428 "claimed": false, 00:10:14.428 "zoned": false, 00:10:14.428 "supported_io_types": { 00:10:14.428 "read": true, 00:10:14.428 "write": true, 00:10:14.428 "unmap": false, 00:10:14.428 "flush": false, 00:10:14.428 "reset": true, 00:10:14.428 "nvme_admin": false, 00:10:14.428 "nvme_io": false, 00:10:14.428 "nvme_io_md": false, 00:10:14.428 "write_zeroes": true, 00:10:14.428 "zcopy": false, 00:10:14.428 "get_zone_info": false, 00:10:14.428 "zone_management": false, 00:10:14.428 "zone_append": false, 00:10:14.428 "compare": false, 00:10:14.428 "compare_and_write": false, 00:10:14.428 
"abort": false, 00:10:14.428 "seek_hole": false, 00:10:14.428 "seek_data": false, 00:10:14.428 "copy": false, 00:10:14.428 "nvme_iov_md": false 00:10:14.428 }, 00:10:14.428 "memory_domains": [ 00:10:14.428 { 00:10:14.428 "dma_device_id": "system", 00:10:14.428 "dma_device_type": 1 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.428 "dma_device_type": 2 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "system", 00:10:14.428 "dma_device_type": 1 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.428 "dma_device_type": 2 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "system", 00:10:14.428 "dma_device_type": 1 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.428 "dma_device_type": 2 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "system", 00:10:14.428 "dma_device_type": 1 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.428 "dma_device_type": 2 00:10:14.428 } 00:10:14.428 ], 00:10:14.428 "driver_specific": { 00:10:14.428 "raid": { 00:10:14.428 "uuid": "bb4bb2c9-908e-4622-a595-77bac7db1f43", 00:10:14.428 "strip_size_kb": 0, 00:10:14.428 "state": "online", 00:10:14.428 "raid_level": "raid1", 00:10:14.428 "superblock": true, 00:10:14.428 "num_base_bdevs": 4, 00:10:14.428 "num_base_bdevs_discovered": 4, 00:10:14.428 "num_base_bdevs_operational": 4, 00:10:14.428 "base_bdevs_list": [ 00:10:14.428 { 00:10:14.428 "name": "NewBaseBdev", 00:10:14.428 "uuid": "9c8108aa-68b5-4bc4-885a-f9e687b1608a", 00:10:14.428 "is_configured": true, 00:10:14.428 "data_offset": 2048, 00:10:14.428 "data_size": 63488 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "name": "BaseBdev2", 00:10:14.428 "uuid": "3270d003-cc6a-4405-93c7-1ae65fb3eb33", 00:10:14.428 "is_configured": true, 00:10:14.428 "data_offset": 2048, 00:10:14.428 "data_size": 63488 00:10:14.428 }, 00:10:14.428 { 
00:10:14.428 "name": "BaseBdev3", 00:10:14.428 "uuid": "a1093eac-a292-445f-af0e-c167c43ff385", 00:10:14.428 "is_configured": true, 00:10:14.428 "data_offset": 2048, 00:10:14.428 "data_size": 63488 00:10:14.428 }, 00:10:14.428 { 00:10:14.428 "name": "BaseBdev4", 00:10:14.428 "uuid": "fd916615-dffc-4881-a636-731f5001c640", 00:10:14.428 "is_configured": true, 00:10:14.428 "data_offset": 2048, 00:10:14.428 "data_size": 63488 00:10:14.428 } 00:10:14.428 ] 00:10:14.428 } 00:10:14.428 } 00:10:14.428 }' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:14.428 BaseBdev2 00:10:14.428 BaseBdev3 00:10:14.428 BaseBdev4' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.428 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.689 [2024-12-15 18:41:14.929268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.689 [2024-12-15 18:41:14.929400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.689 [2024-12-15 18:41:14.929543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.689 [2024-12-15 18:41:14.929852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.689 [2024-12-15 18:41:14.929914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86552 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86552 ']' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 86552 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86552 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86552' 00:10:14.689 killing process with pid 86552 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 86552 00:10:14.689 [2024-12-15 18:41:14.977721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.689 18:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 86552 00:10:14.689 [2024-12-15 18:41:15.019946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.949 18:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.949 00:10:14.949 real 0m9.867s 00:10:14.949 user 0m16.789s 00:10:14.949 sys 0m2.137s 00:10:14.949 ************************************ 00:10:14.949 END TEST raid_state_function_test_sb 
00:10:14.949 ************************************ 00:10:14.949 18:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.949 18:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 18:41:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:14.949 18:41:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.949 18:41:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.949 18:41:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 ************************************ 00:10:14.949 START TEST raid_superblock_test 00:10:14.949 ************************************ 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:14.949 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:14.950 18:41:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=87204 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 87204 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 87204 ']' 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.950 18:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.209 [2024-12-15 18:41:15.408086] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:15.209 [2024-12-15 18:41:15.408224] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87204 ] 00:10:15.209 [2024-12-15 18:41:15.563081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.209 [2024-12-15 18:41:15.589680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.209 [2024-12-15 18:41:15.633002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.209 [2024-12-15 18:41:15.633051] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.148 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.149 
18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 malloc1 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 [2024-12-15 18:41:16.268872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.149 [2024-12-15 18:41:16.269022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.149 [2024-12-15 18:41:16.269068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.149 [2024-12-15 18:41:16.269103] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.149 [2024-12-15 18:41:16.271151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.149 [2024-12-15 18:41:16.271229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.149 pt1 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 malloc2 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 [2024-12-15 18:41:16.301412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.149 [2024-12-15 18:41:16.301473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.149 [2024-12-15 18:41:16.301490] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.149 [2024-12-15 18:41:16.301501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.149 [2024-12-15 18:41:16.303593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.149 [2024-12-15 18:41:16.303635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.149 
pt2 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 malloc3 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 [2024-12-15 18:41:16.330423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.149 [2024-12-15 18:41:16.330569] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.149 [2024-12-15 18:41:16.330609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.149 [2024-12-15 18:41:16.330641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.149 [2024-12-15 18:41:16.332712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.149 [2024-12-15 18:41:16.332791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.149 pt3 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 malloc4 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 [2024-12-15 18:41:16.370383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:16.149 [2024-12-15 18:41:16.370516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.149 [2024-12-15 18:41:16.370553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:16.149 [2024-12-15 18:41:16.370587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.149 [2024-12-15 18:41:16.372697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.149 [2024-12-15 18:41:16.372781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:16.149 pt4 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.149 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.149 [2024-12-15 18:41:16.382417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.149 [2024-12-15 18:41:16.384238] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.149 [2024-12-15 18:41:16.384299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.149 [2024-12-15 18:41:16.384362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:16.149 [2024-12-15 18:41:16.384557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:16.149 [2024-12-15 18:41:16.384573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.149 [2024-12-15 18:41:16.384884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.150 [2024-12-15 18:41:16.385048] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:16.150 [2024-12-15 18:41:16.385073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:16.150 [2024-12-15 18:41:16.385220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.150 
18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.150 "name": "raid_bdev1", 00:10:16.150 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:16.150 "strip_size_kb": 0, 00:10:16.150 "state": "online", 00:10:16.150 "raid_level": "raid1", 00:10:16.150 "superblock": true, 00:10:16.150 "num_base_bdevs": 4, 00:10:16.150 "num_base_bdevs_discovered": 4, 00:10:16.150 "num_base_bdevs_operational": 4, 00:10:16.150 "base_bdevs_list": [ 00:10:16.150 { 00:10:16.150 "name": "pt1", 00:10:16.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.150 "is_configured": true, 00:10:16.150 "data_offset": 2048, 00:10:16.150 "data_size": 63488 00:10:16.150 }, 00:10:16.150 { 00:10:16.150 "name": "pt2", 00:10:16.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.150 "is_configured": true, 00:10:16.150 "data_offset": 2048, 00:10:16.150 "data_size": 63488 00:10:16.150 }, 00:10:16.150 { 00:10:16.150 "name": "pt3", 00:10:16.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.150 "is_configured": true, 00:10:16.150 "data_offset": 2048, 00:10:16.150 "data_size": 63488 
00:10:16.150 }, 00:10:16.150 { 00:10:16.150 "name": "pt4", 00:10:16.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.150 "is_configured": true, 00:10:16.150 "data_offset": 2048, 00:10:16.150 "data_size": 63488 00:10:16.150 } 00:10:16.150 ] 00:10:16.150 }' 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.150 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.410 [2024-12-15 18:41:16.825969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.410 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.671 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.671 "name": "raid_bdev1", 00:10:16.671 "aliases": [ 00:10:16.671 "d50db7bd-da20-4be5-ae03-b0c1926c1a8e" 00:10:16.671 ], 
00:10:16.671 "product_name": "Raid Volume", 00:10:16.671 "block_size": 512, 00:10:16.671 "num_blocks": 63488, 00:10:16.671 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:16.671 "assigned_rate_limits": { 00:10:16.671 "rw_ios_per_sec": 0, 00:10:16.671 "rw_mbytes_per_sec": 0, 00:10:16.671 "r_mbytes_per_sec": 0, 00:10:16.671 "w_mbytes_per_sec": 0 00:10:16.671 }, 00:10:16.671 "claimed": false, 00:10:16.671 "zoned": false, 00:10:16.671 "supported_io_types": { 00:10:16.671 "read": true, 00:10:16.671 "write": true, 00:10:16.671 "unmap": false, 00:10:16.671 "flush": false, 00:10:16.671 "reset": true, 00:10:16.671 "nvme_admin": false, 00:10:16.671 "nvme_io": false, 00:10:16.671 "nvme_io_md": false, 00:10:16.671 "write_zeroes": true, 00:10:16.671 "zcopy": false, 00:10:16.671 "get_zone_info": false, 00:10:16.671 "zone_management": false, 00:10:16.671 "zone_append": false, 00:10:16.671 "compare": false, 00:10:16.671 "compare_and_write": false, 00:10:16.671 "abort": false, 00:10:16.671 "seek_hole": false, 00:10:16.671 "seek_data": false, 00:10:16.671 "copy": false, 00:10:16.671 "nvme_iov_md": false 00:10:16.671 }, 00:10:16.671 "memory_domains": [ 00:10:16.671 { 00:10:16.671 "dma_device_id": "system", 00:10:16.671 "dma_device_type": 1 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.671 "dma_device_type": 2 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "system", 00:10:16.671 "dma_device_type": 1 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.671 "dma_device_type": 2 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "system", 00:10:16.671 "dma_device_type": 1 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.671 "dma_device_type": 2 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": "system", 00:10:16.671 "dma_device_type": 1 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:16.671 "dma_device_type": 2 00:10:16.671 } 00:10:16.671 ], 00:10:16.671 "driver_specific": { 00:10:16.671 "raid": { 00:10:16.671 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:16.671 "strip_size_kb": 0, 00:10:16.671 "state": "online", 00:10:16.671 "raid_level": "raid1", 00:10:16.671 "superblock": true, 00:10:16.671 "num_base_bdevs": 4, 00:10:16.671 "num_base_bdevs_discovered": 4, 00:10:16.671 "num_base_bdevs_operational": 4, 00:10:16.671 "base_bdevs_list": [ 00:10:16.671 { 00:10:16.671 "name": "pt1", 00:10:16.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.671 "is_configured": true, 00:10:16.671 "data_offset": 2048, 00:10:16.671 "data_size": 63488 00:10:16.671 }, 00:10:16.671 { 00:10:16.671 "name": "pt2", 00:10:16.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.671 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 }, 00:10:16.672 { 00:10:16.672 "name": "pt3", 00:10:16.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.672 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 }, 00:10:16.672 { 00:10:16.672 "name": "pt4", 00:10:16.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.672 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 } 00:10:16.672 ] 00:10:16.672 } 00:10:16.672 } 00:10:16.672 }' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.672 pt2 00:10:16.672 pt3 00:10:16.672 pt4' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.672 18:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.672 18:41:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.672 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 [2024-12-15 18:41:17.125391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d50db7bd-da20-4be5-ae03-b0c1926c1a8e 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d50db7bd-da20-4be5-ae03-b0c1926c1a8e ']' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 [2024-12-15 18:41:17.157045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.933 [2024-12-15 18:41:17.157076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.933 [2024-12-15 18:41:17.157153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.933 [2024-12-15 18:41:17.157242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.933 [2024-12-15 18:41:17.157252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.933 18:41:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.933 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.933 [2024-12-15 18:41:17.328799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:16.933 [2024-12-15 18:41:17.330624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:16.933 [2024-12-15 18:41:17.330676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:16.933 [2024-12-15 18:41:17.330704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:16.933 [2024-12-15 18:41:17.330748] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:16.933 [2024-12-15 18:41:17.330788] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:16.933 [2024-12-15 18:41:17.330817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:16.933 [2024-12-15 18:41:17.330833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:16.933 [2024-12-15 18:41:17.330846] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.933 [2024-12-15 18:41:17.330854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:16.933 request: 00:10:16.933 { 00:10:16.933 "name": "raid_bdev1", 00:10:16.933 "raid_level": "raid1", 00:10:16.933 "base_bdevs": [ 00:10:16.933 "malloc1", 00:10:16.933 "malloc2", 00:10:16.933 "malloc3", 00:10:16.933 "malloc4" 00:10:16.933 ], 00:10:16.933 "superblock": false, 00:10:16.933 "method": "bdev_raid_create", 00:10:16.933 "req_id": 1 00:10:16.933 } 00:10:16.933 Got JSON-RPC error response 00:10:16.933 response: 00:10:16.933 { 00:10:16.933 "code": -17, 00:10:16.933 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:16.933 } 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.934 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.194 
18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 [2024-12-15 18:41:17.396659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.194 [2024-12-15 18:41:17.396714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.194 [2024-12-15 18:41:17.396734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:17.194 [2024-12-15 18:41:17.396755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.194 [2024-12-15 18:41:17.399031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.194 [2024-12-15 18:41:17.399134] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.194 [2024-12-15 18:41:17.399212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.194 [2024-12-15 18:41:17.399246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.194 pt1 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.194 18:41:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.194 "name": "raid_bdev1", 00:10:17.194 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:17.194 "strip_size_kb": 0, 00:10:17.194 "state": "configuring", 00:10:17.194 "raid_level": "raid1", 00:10:17.194 "superblock": true, 00:10:17.194 "num_base_bdevs": 4, 00:10:17.194 "num_base_bdevs_discovered": 1, 00:10:17.194 "num_base_bdevs_operational": 4, 00:10:17.194 "base_bdevs_list": [ 00:10:17.194 { 00:10:17.194 "name": "pt1", 00:10:17.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.194 "is_configured": true, 00:10:17.194 "data_offset": 2048, 00:10:17.194 "data_size": 63488 00:10:17.194 }, 00:10:17.194 { 00:10:17.194 "name": null, 00:10:17.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.194 "is_configured": false, 00:10:17.194 "data_offset": 2048, 00:10:17.194 "data_size": 63488 00:10:17.194 }, 00:10:17.194 { 00:10:17.194 "name": null, 00:10:17.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.194 
"is_configured": false, 00:10:17.194 "data_offset": 2048, 00:10:17.194 "data_size": 63488 00:10:17.194 }, 00:10:17.194 { 00:10:17.194 "name": null, 00:10:17.194 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.194 "is_configured": false, 00:10:17.194 "data_offset": 2048, 00:10:17.194 "data_size": 63488 00:10:17.194 } 00:10:17.194 ] 00:10:17.194 }' 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.194 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 [2024-12-15 18:41:17.852002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.455 [2024-12-15 18:41:17.852140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.455 [2024-12-15 18:41:17.852182] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:17.455 [2024-12-15 18:41:17.852210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.455 [2024-12-15 18:41:17.852652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.455 [2024-12-15 18:41:17.852711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.455 [2024-12-15 18:41:17.852832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.455 [2024-12-15 18:41:17.852883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:17.455 pt2 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 [2024-12-15 18:41:17.863965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.455 18:41:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.455 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.715 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.715 "name": "raid_bdev1", 00:10:17.715 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:17.715 "strip_size_kb": 0, 00:10:17.715 "state": "configuring", 00:10:17.715 "raid_level": "raid1", 00:10:17.715 "superblock": true, 00:10:17.715 "num_base_bdevs": 4, 00:10:17.715 "num_base_bdevs_discovered": 1, 00:10:17.715 "num_base_bdevs_operational": 4, 00:10:17.715 "base_bdevs_list": [ 00:10:17.715 { 00:10:17.715 "name": "pt1", 00:10:17.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.715 "is_configured": true, 00:10:17.715 "data_offset": 2048, 00:10:17.715 "data_size": 63488 00:10:17.715 }, 00:10:17.715 { 00:10:17.715 "name": null, 00:10:17.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.715 "is_configured": false, 00:10:17.715 "data_offset": 0, 00:10:17.715 "data_size": 63488 00:10:17.715 }, 00:10:17.715 { 00:10:17.715 "name": null, 00:10:17.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.715 "is_configured": false, 00:10:17.715 "data_offset": 2048, 00:10:17.715 "data_size": 63488 00:10:17.715 }, 00:10:17.715 { 00:10:17.715 "name": null, 00:10:17.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.715 "is_configured": false, 00:10:17.715 "data_offset": 2048, 00:10:17.715 "data_size": 63488 00:10:17.715 } 00:10:17.715 ] 00:10:17.715 }' 00:10:17.715 18:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.715 18:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.975 [2024-12-15 18:41:18.299339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.975 [2024-12-15 18:41:18.299479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.975 [2024-12-15 18:41:18.299514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:17.975 [2024-12-15 18:41:18.299542] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.975 [2024-12-15 18:41:18.299970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.975 [2024-12-15 18:41:18.300029] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.975 [2024-12-15 18:41:18.300128] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.975 [2024-12-15 18:41:18.300179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.975 pt2 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.975 18:41:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.975 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.975 [2024-12-15 18:41:18.311289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.975 [2024-12-15 18:41:18.311380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.975 [2024-12-15 18:41:18.311410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:17.975 [2024-12-15 18:41:18.311438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.976 [2024-12-15 18:41:18.311756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.976 [2024-12-15 18:41:18.311830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.976 [2024-12-15 18:41:18.311912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.976 [2024-12-15 18:41:18.311967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.976 pt3 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.976 [2024-12-15 18:41:18.323259] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.976 [2024-12-15 
18:41:18.323307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.976 [2024-12-15 18:41:18.323320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:17.976 [2024-12-15 18:41:18.323330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.976 [2024-12-15 18:41:18.323621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.976 [2024-12-15 18:41:18.323640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.976 [2024-12-15 18:41:18.323705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:17.976 [2024-12-15 18:41:18.323730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.976 [2024-12-15 18:41:18.323862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:17.976 [2024-12-15 18:41:18.323877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.976 [2024-12-15 18:41:18.324094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.976 [2024-12-15 18:41:18.324211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:17.976 [2024-12-15 18:41:18.324230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:17.976 [2024-12-15 18:41:18.324330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.976 pt4 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.976 "name": "raid_bdev1", 00:10:17.976 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:17.976 "strip_size_kb": 0, 00:10:17.976 "state": "online", 00:10:17.976 "raid_level": "raid1", 00:10:17.976 "superblock": true, 00:10:17.976 "num_base_bdevs": 4, 00:10:17.976 
"num_base_bdevs_discovered": 4, 00:10:17.976 "num_base_bdevs_operational": 4, 00:10:17.976 "base_bdevs_list": [ 00:10:17.976 { 00:10:17.976 "name": "pt1", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "pt2", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "pt3", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 }, 00:10:17.976 { 00:10:17.976 "name": "pt4", 00:10:17.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.976 "is_configured": true, 00:10:17.976 "data_offset": 2048, 00:10:17.976 "data_size": 63488 00:10:17.976 } 00:10:17.976 ] 00:10:17.976 }' 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.976 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.546 [2024-12-15 18:41:18.774835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.546 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.546 "name": "raid_bdev1", 00:10:18.546 "aliases": [ 00:10:18.546 "d50db7bd-da20-4be5-ae03-b0c1926c1a8e" 00:10:18.547 ], 00:10:18.547 "product_name": "Raid Volume", 00:10:18.547 "block_size": 512, 00:10:18.547 "num_blocks": 63488, 00:10:18.547 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:18.547 "assigned_rate_limits": { 00:10:18.547 "rw_ios_per_sec": 0, 00:10:18.547 "rw_mbytes_per_sec": 0, 00:10:18.547 "r_mbytes_per_sec": 0, 00:10:18.547 "w_mbytes_per_sec": 0 00:10:18.547 }, 00:10:18.547 "claimed": false, 00:10:18.547 "zoned": false, 00:10:18.547 "supported_io_types": { 00:10:18.547 "read": true, 00:10:18.547 "write": true, 00:10:18.547 "unmap": false, 00:10:18.547 "flush": false, 00:10:18.547 "reset": true, 00:10:18.547 "nvme_admin": false, 00:10:18.547 "nvme_io": false, 00:10:18.547 "nvme_io_md": false, 00:10:18.547 "write_zeroes": true, 00:10:18.547 "zcopy": false, 00:10:18.547 "get_zone_info": false, 00:10:18.547 "zone_management": false, 00:10:18.547 "zone_append": false, 00:10:18.547 "compare": false, 00:10:18.547 "compare_and_write": false, 00:10:18.547 "abort": false, 00:10:18.547 "seek_hole": false, 00:10:18.547 "seek_data": false, 00:10:18.547 "copy": false, 00:10:18.547 "nvme_iov_md": false 00:10:18.547 }, 00:10:18.547 "memory_domains": [ 00:10:18.547 { 00:10:18.547 "dma_device_id": "system", 00:10:18.547 
"dma_device_type": 1 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.547 "dma_device_type": 2 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "system", 00:10:18.547 "dma_device_type": 1 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.547 "dma_device_type": 2 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "system", 00:10:18.547 "dma_device_type": 1 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.547 "dma_device_type": 2 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "system", 00:10:18.547 "dma_device_type": 1 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.547 "dma_device_type": 2 00:10:18.547 } 00:10:18.547 ], 00:10:18.547 "driver_specific": { 00:10:18.547 "raid": { 00:10:18.547 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:18.547 "strip_size_kb": 0, 00:10:18.547 "state": "online", 00:10:18.547 "raid_level": "raid1", 00:10:18.547 "superblock": true, 00:10:18.547 "num_base_bdevs": 4, 00:10:18.547 "num_base_bdevs_discovered": 4, 00:10:18.547 "num_base_bdevs_operational": 4, 00:10:18.547 "base_bdevs_list": [ 00:10:18.547 { 00:10:18.547 "name": "pt1", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "pt2", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "pt3", 00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "pt4", 00:10:18.547 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 } 00:10:18.547 ] 00:10:18.547 } 00:10:18.547 } 00:10:18.547 }' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.547 pt2 00:10:18.547 pt3 00:10:18.547 pt4' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.547 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 18:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.807 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.807 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.807 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 [2024-12-15 18:41:19.110211] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d50db7bd-da20-4be5-ae03-b0c1926c1a8e '!=' d50db7bd-da20-4be5-ae03-b0c1926c1a8e ']' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 [2024-12-15 18:41:19.141891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:18.808 18:41:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.808 "name": "raid_bdev1", 00:10:18.808 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:18.808 "strip_size_kb": 0, 00:10:18.808 "state": "online", 
00:10:18.808 "raid_level": "raid1", 00:10:18.808 "superblock": true, 00:10:18.808 "num_base_bdevs": 4, 00:10:18.808 "num_base_bdevs_discovered": 3, 00:10:18.808 "num_base_bdevs_operational": 3, 00:10:18.808 "base_bdevs_list": [ 00:10:18.808 { 00:10:18.808 "name": null, 00:10:18.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.808 "is_configured": false, 00:10:18.808 "data_offset": 0, 00:10:18.808 "data_size": 63488 00:10:18.808 }, 00:10:18.808 { 00:10:18.808 "name": "pt2", 00:10:18.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.808 "is_configured": true, 00:10:18.808 "data_offset": 2048, 00:10:18.808 "data_size": 63488 00:10:18.808 }, 00:10:18.808 { 00:10:18.808 "name": "pt3", 00:10:18.808 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.808 "is_configured": true, 00:10:18.808 "data_offset": 2048, 00:10:18.808 "data_size": 63488 00:10:18.808 }, 00:10:18.808 { 00:10:18.808 "name": "pt4", 00:10:18.808 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.808 "is_configured": true, 00:10:18.808 "data_offset": 2048, 00:10:18.808 "data_size": 63488 00:10:18.808 } 00:10:18.808 ] 00:10:18.808 }' 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.808 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 [2024-12-15 18:41:19.541177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.378 [2024-12-15 18:41:19.541283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.378 [2024-12-15 18:41:19.541386] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:19.378 [2024-12-15 18:41:19.541476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.378 [2024-12-15 18:41:19.541549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:19.378 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.379 
18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.379 [2024-12-15 18:41:19.640979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.379 [2024-12-15 18:41:19.641042] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.379 [2024-12-15 18:41:19.641058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:19.379 [2024-12-15 18:41:19.641068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.379 [2024-12-15 18:41:19.643197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.379 [2024-12-15 18:41:19.643310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.379 [2024-12-15 18:41:19.643384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.379 [2024-12-15 18:41:19.643420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.379 pt2 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.379 "name": "raid_bdev1", 00:10:19.379 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:19.379 "strip_size_kb": 0, 00:10:19.379 "state": "configuring", 00:10:19.379 "raid_level": "raid1", 00:10:19.379 "superblock": true, 00:10:19.379 "num_base_bdevs": 4, 00:10:19.379 "num_base_bdevs_discovered": 1, 00:10:19.379 "num_base_bdevs_operational": 3, 00:10:19.379 "base_bdevs_list": [ 00:10:19.379 { 00:10:19.379 "name": null, 00:10:19.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.379 "is_configured": false, 00:10:19.379 "data_offset": 2048, 00:10:19.379 "data_size": 63488 00:10:19.379 }, 00:10:19.379 { 00:10:19.379 "name": "pt2", 00:10:19.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.379 "is_configured": true, 00:10:19.379 "data_offset": 2048, 00:10:19.379 "data_size": 63488 00:10:19.379 }, 00:10:19.379 { 00:10:19.379 "name": null, 00:10:19.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.379 "is_configured": false, 00:10:19.379 "data_offset": 2048, 00:10:19.379 "data_size": 63488 00:10:19.379 }, 00:10:19.379 { 00:10:19.379 "name": null, 00:10:19.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.379 "is_configured": false, 00:10:19.379 "data_offset": 2048, 00:10:19.379 "data_size": 63488 00:10:19.379 } 00:10:19.379 ] 00:10:19.379 }' 
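For context on what the trace above is doing: `verify_raid_bdev_state` fetches the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "raid_bdev1")'`, and then compares fields such as `state`, `raid_level`, and the base-bdev counts against the expected values passed in. The sketch below mimics that comparison on an abridged copy of the `raid_bdev_info` JSON dumped just above (state `configuring`, 1 of 3 base bdevs discovered). It is illustrative only, not the real `bdev_raid.sh` helper, and it substitutes `sed` for `jq` purely to stay dependency-free:

```shell
#!/usr/bin/env bash
# Illustrative sketch only -- not the actual bdev_raid.sh helper.
# Mimics the verify_raid_bdev_state checks against an abridged copy of
# the raid_bdev_info JSON dumped in the trace above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}'

# Crude field extractors (the real script pipes the rpc output through jq).
json_str() { sed -n "s/.*\"$1\": *\"\([^\"]*\)\".*/\1/p" <<< "$raid_bdev_info"; }
json_num() { sed -n "s/.*\"$1\": *\([0-9][0-9]*\).*/\1/p" <<< "$raid_bdev_info"; }

state=$(json_str state)
level=$(json_str raid_level)
discovered=$(json_num num_base_bdevs_discovered)
operational=$(json_num num_base_bdevs_operational)

# The real helper fails the test on any mismatch; do the same here.
[[ $state == configuring ]] || { echo "unexpected state: $state"; exit 1; }
[[ $level == raid1 ]] || { echo "unexpected level: $level"; exit 1; }
(( discovered <= operational )) || exit 1
echo "raid_bdev1: $state, $discovered/$operational base bdevs discovered"
```

In the real suite the same comparison runs after each `bdev_passthru_delete`/`bdev_passthru_create` step, which is why the trace keeps re-dumping the JSON with a different `num_base_bdevs_discovered` count.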
00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.379 18:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.640 [2024-12-15 18:41:20.064486] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.640 [2024-12-15 18:41:20.064661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.640 [2024-12-15 18:41:20.064703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:19.640 [2024-12-15 18:41:20.064739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.640 [2024-12-15 18:41:20.065250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.640 [2024-12-15 18:41:20.065319] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.640 [2024-12-15 18:41:20.065438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:19.640 [2024-12-15 18:41:20.065498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.640 pt3 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.640 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.900 "name": "raid_bdev1", 00:10:19.900 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:19.900 "strip_size_kb": 0, 00:10:19.900 "state": "configuring", 00:10:19.900 "raid_level": "raid1", 00:10:19.900 "superblock": true, 00:10:19.900 "num_base_bdevs": 4, 00:10:19.900 "num_base_bdevs_discovered": 2, 00:10:19.900 "num_base_bdevs_operational": 3, 00:10:19.900 
"base_bdevs_list": [ 00:10:19.900 { 00:10:19.900 "name": null, 00:10:19.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.900 "is_configured": false, 00:10:19.900 "data_offset": 2048, 00:10:19.900 "data_size": 63488 00:10:19.900 }, 00:10:19.900 { 00:10:19.900 "name": "pt2", 00:10:19.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.900 "is_configured": true, 00:10:19.900 "data_offset": 2048, 00:10:19.900 "data_size": 63488 00:10:19.900 }, 00:10:19.900 { 00:10:19.900 "name": "pt3", 00:10:19.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.900 "is_configured": true, 00:10:19.900 "data_offset": 2048, 00:10:19.900 "data_size": 63488 00:10:19.900 }, 00:10:19.900 { 00:10:19.900 "name": null, 00:10:19.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.900 "is_configured": false, 00:10:19.900 "data_offset": 2048, 00:10:19.900 "data_size": 63488 00:10:19.900 } 00:10:19.900 ] 00:10:19.900 }' 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.900 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.167 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.167 [2024-12-15 18:41:20.515680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.167 [2024-12-15 18:41:20.515774] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.167 [2024-12-15 18:41:20.515797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:20.167 [2024-12-15 18:41:20.515819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.167 [2024-12-15 18:41:20.516231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.167 [2024-12-15 18:41:20.516261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.167 [2024-12-15 18:41:20.516338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:20.167 [2024-12-15 18:41:20.516365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.168 [2024-12-15 18:41:20.516476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:20.168 [2024-12-15 18:41:20.516488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.168 [2024-12-15 18:41:20.516729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.168 [2024-12-15 18:41:20.516884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:20.168 [2024-12-15 18:41:20.516895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:20.168 [2024-12-15 18:41:20.517003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.168 pt4 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.168 "name": "raid_bdev1", 00:10:20.168 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:20.168 "strip_size_kb": 0, 00:10:20.168 "state": "online", 00:10:20.168 "raid_level": "raid1", 00:10:20.168 "superblock": true, 00:10:20.168 "num_base_bdevs": 4, 00:10:20.168 "num_base_bdevs_discovered": 3, 00:10:20.168 "num_base_bdevs_operational": 3, 00:10:20.168 "base_bdevs_list": [ 00:10:20.168 { 00:10:20.168 "name": null, 00:10:20.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.168 "is_configured": false, 00:10:20.168 
"data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "name": "pt2", 00:10:20.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.168 "is_configured": true, 00:10:20.168 "data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "name": "pt3", 00:10:20.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.168 "is_configured": true, 00:10:20.168 "data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 }, 00:10:20.168 { 00:10:20.168 "name": "pt4", 00:10:20.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.168 "is_configured": true, 00:10:20.168 "data_offset": 2048, 00:10:20.168 "data_size": 63488 00:10:20.168 } 00:10:20.168 ] 00:10:20.168 }' 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.168 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 18:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.736 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.736 18:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.736 [2024-12-15 18:41:20.998906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.736 [2024-12-15 18:41:20.998946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.736 [2024-12-15 18:41:20.999028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.736 [2024-12-15 18:41:20.999102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.736 [2024-12-15 18:41:20.999111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:20.737 18:41:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 [2024-12-15 18:41:21.054749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.737 [2024-12-15 18:41:21.054824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:20.737 [2024-12-15 18:41:21.054844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:20.737 [2024-12-15 18:41:21.054853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.737 [2024-12-15 18:41:21.057098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.737 [2024-12-15 18:41:21.057136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.737 [2024-12-15 18:41:21.057203] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:20.737 [2024-12-15 18:41:21.057243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.737 [2024-12-15 18:41:21.057357] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:20.737 [2024-12-15 18:41:21.057369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.737 [2024-12-15 18:41:21.057384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:20.737 [2024-12-15 18:41:21.057415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.737 [2024-12-15 18:41:21.057495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.737 pt1 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.737 "name": "raid_bdev1", 00:10:20.737 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:20.737 "strip_size_kb": 0, 00:10:20.737 "state": "configuring", 00:10:20.737 "raid_level": "raid1", 00:10:20.737 "superblock": true, 00:10:20.737 "num_base_bdevs": 4, 00:10:20.737 "num_base_bdevs_discovered": 2, 00:10:20.737 "num_base_bdevs_operational": 3, 00:10:20.737 "base_bdevs_list": [ 00:10:20.737 { 00:10:20.737 "name": null, 00:10:20.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.737 "is_configured": false, 00:10:20.737 "data_offset": 2048, 00:10:20.737 
"data_size": 63488 00:10:20.737 }, 00:10:20.737 { 00:10:20.737 "name": "pt2", 00:10:20.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.737 "is_configured": true, 00:10:20.737 "data_offset": 2048, 00:10:20.737 "data_size": 63488 00:10:20.737 }, 00:10:20.737 { 00:10:20.737 "name": "pt3", 00:10:20.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.737 "is_configured": true, 00:10:20.737 "data_offset": 2048, 00:10:20.737 "data_size": 63488 00:10:20.737 }, 00:10:20.737 { 00:10:20.737 "name": null, 00:10:20.737 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.737 "is_configured": false, 00:10:20.737 "data_offset": 2048, 00:10:20.737 "data_size": 63488 00:10:20.737 } 00:10:20.737 ] 00:10:20.737 }' 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.737 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.307 [2024-12-15 
18:41:21.557905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:21.307 [2024-12-15 18:41:21.558052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.307 [2024-12-15 18:41:21.558089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:21.307 [2024-12-15 18:41:21.558119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.307 [2024-12-15 18:41:21.558522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.307 [2024-12-15 18:41:21.558544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:21.307 [2024-12-15 18:41:21.558609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:21.307 [2024-12-15 18:41:21.558632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:21.307 [2024-12-15 18:41:21.558727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:21.307 [2024-12-15 18:41:21.558737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.307 [2024-12-15 18:41:21.558975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:21.307 [2024-12-15 18:41:21.559095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:21.307 [2024-12-15 18:41:21.559103] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:21.307 [2024-12-15 18:41:21.559207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.307 pt4 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.307 18:41:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.307 "name": "raid_bdev1", 00:10:21.307 "uuid": "d50db7bd-da20-4be5-ae03-b0c1926c1a8e", 00:10:21.307 "strip_size_kb": 0, 00:10:21.307 "state": "online", 00:10:21.307 "raid_level": "raid1", 00:10:21.307 "superblock": true, 00:10:21.307 "num_base_bdevs": 4, 00:10:21.307 "num_base_bdevs_discovered": 3, 00:10:21.307 "num_base_bdevs_operational": 3, 00:10:21.307 "base_bdevs_list": [ 00:10:21.307 { 
00:10:21.307 "name": null, 00:10:21.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.307 "is_configured": false, 00:10:21.307 "data_offset": 2048, 00:10:21.307 "data_size": 63488 00:10:21.307 }, 00:10:21.307 { 00:10:21.307 "name": "pt2", 00:10:21.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.307 "is_configured": true, 00:10:21.307 "data_offset": 2048, 00:10:21.307 "data_size": 63488 00:10:21.307 }, 00:10:21.307 { 00:10:21.307 "name": "pt3", 00:10:21.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.307 "is_configured": true, 00:10:21.307 "data_offset": 2048, 00:10:21.307 "data_size": 63488 00:10:21.307 }, 00:10:21.307 { 00:10:21.307 "name": "pt4", 00:10:21.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.307 "is_configured": true, 00:10:21.307 "data_offset": 2048, 00:10:21.307 "data_size": 63488 00:10:21.307 } 00:10:21.307 ] 00:10:21.307 }' 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.307 18:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.875 
18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.875 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.876 [2024-12-15 18:41:22.081332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d50db7bd-da20-4be5-ae03-b0c1926c1a8e '!=' d50db7bd-da20-4be5-ae03-b0c1926c1a8e ']' 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 87204 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 87204 ']' 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 87204 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87204 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87204' 00:10:21.876 killing process with pid 87204 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 87204 00:10:21.876 [2024-12-15 18:41:22.150734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.876 [2024-12-15 18:41:22.150881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.876 [2024-12-15 18:41:22.150985] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.876 [2024-12-15 18:41:22.151031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:21.876 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 87204 00:10:21.876 [2024-12-15 18:41:22.194324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.134 18:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:22.134 00:10:22.134 real 0m7.100s 00:10:22.135 user 0m11.910s 00:10:22.135 sys 0m1.484s 00:10:22.135 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.135 18:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.135 ************************************ 00:10:22.135 END TEST raid_superblock_test 00:10:22.135 ************************************ 00:10:22.135 18:41:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:22.135 18:41:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:22.135 18:41:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.135 18:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.135 ************************************ 00:10:22.135 START TEST raid_read_error_test 00:10:22.135 ************************************ 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:22.135 18:41:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jJBuHeGIuf 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87676 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87676 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 87676 ']' 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.135 18:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.394 [2024-12-15 18:41:22.592640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:22.394 [2024-12-15 18:41:22.592891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87676 ] 00:10:22.394 [2024-12-15 18:41:22.756615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.394 [2024-12-15 18:41:22.781436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.394 [2024-12-15 18:41:22.824596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.394 [2024-12-15 18:41:22.824634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 BaseBdev1_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 true 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 [2024-12-15 18:41:23.508520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.331 [2024-12-15 18:41:23.508675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.331 [2024-12-15 18:41:23.508704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:23.331 [2024-12-15 18:41:23.508714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.331 [2024-12-15 18:41:23.510864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.331 [2024-12-15 18:41:23.510899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.331 BaseBdev1 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 BaseBdev2_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 true 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 [2024-12-15 18:41:23.549036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.331 [2024-12-15 18:41:23.549090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.331 [2024-12-15 18:41:23.549111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:23.331 [2024-12-15 18:41:23.549120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.331 [2024-12-15 18:41:23.551129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.331 [2024-12-15 18:41:23.551256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.331 BaseBdev2 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 BaseBdev3_malloc 00:10:23.331 18:41:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 true 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 [2024-12-15 18:41:23.589568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.331 [2024-12-15 18:41:23.589688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.331 [2024-12-15 18:41:23.589714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:23.331 [2024-12-15 18:41:23.589723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.331 [2024-12-15 18:41:23.591793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.331 [2024-12-15 18:41:23.591845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.331 BaseBdev3 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 BaseBdev4_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 true 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 [2024-12-15 18:41:23.638224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:23.331 [2024-12-15 18:41:23.638275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.331 [2024-12-15 18:41:23.638296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:23.331 [2024-12-15 18:41:23.638304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.331 [2024-12-15 18:41:23.640319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.331 [2024-12-15 18:41:23.640452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:23.331 BaseBdev4 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.331 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.331 [2024-12-15 18:41:23.650271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.331 [2024-12-15 18:41:23.652051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.332 [2024-12-15 18:41:23.652135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.332 [2024-12-15 18:41:23.652186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.332 [2024-12-15 18:41:23.652374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:23.332 [2024-12-15 18:41:23.652385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.332 [2024-12-15 18:41:23.652642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:23.332 [2024-12-15 18:41:23.652783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:23.332 [2024-12-15 18:41:23.652806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:23.332 [2024-12-15 18:41:23.652948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:23.332 18:41:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.332 "name": "raid_bdev1", 00:10:23.332 "uuid": "7ac2bdfb-3181-4542-bd38-8af1998fae87", 00:10:23.332 "strip_size_kb": 0, 00:10:23.332 "state": "online", 00:10:23.332 "raid_level": "raid1", 00:10:23.332 "superblock": true, 00:10:23.332 "num_base_bdevs": 4, 00:10:23.332 "num_base_bdevs_discovered": 4, 00:10:23.332 "num_base_bdevs_operational": 4, 00:10:23.332 "base_bdevs_list": [ 00:10:23.332 { 
00:10:23.332 "name": "BaseBdev1", 00:10:23.332 "uuid": "6d4053ed-8aea-5b75-b8a5-6dda369c9132", 00:10:23.332 "is_configured": true, 00:10:23.332 "data_offset": 2048, 00:10:23.332 "data_size": 63488 00:10:23.332 }, 00:10:23.332 { 00:10:23.332 "name": "BaseBdev2", 00:10:23.332 "uuid": "d25cd243-04e0-5ddf-b8e8-8362caa8ba26", 00:10:23.332 "is_configured": true, 00:10:23.332 "data_offset": 2048, 00:10:23.332 "data_size": 63488 00:10:23.332 }, 00:10:23.332 { 00:10:23.332 "name": "BaseBdev3", 00:10:23.332 "uuid": "bfbadb09-4a67-5906-a530-8cf0107dd3f7", 00:10:23.332 "is_configured": true, 00:10:23.332 "data_offset": 2048, 00:10:23.332 "data_size": 63488 00:10:23.332 }, 00:10:23.332 { 00:10:23.332 "name": "BaseBdev4", 00:10:23.332 "uuid": "c3c7ea88-07e6-56cc-bd2a-0b33bfce7457", 00:10:23.332 "is_configured": true, 00:10:23.332 "data_offset": 2048, 00:10:23.332 "data_size": 63488 00:10:23.332 } 00:10:23.332 ] 00:10:23.332 }' 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.332 18:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.899 18:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.899 18:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.899 [2024-12-15 18:41:24.193832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.847 18:41:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.847 18:41:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.847 "name": "raid_bdev1", 00:10:24.847 "uuid": "7ac2bdfb-3181-4542-bd38-8af1998fae87", 00:10:24.847 "strip_size_kb": 0, 00:10:24.847 "state": "online", 00:10:24.847 "raid_level": "raid1", 00:10:24.847 "superblock": true, 00:10:24.847 "num_base_bdevs": 4, 00:10:24.847 "num_base_bdevs_discovered": 4, 00:10:24.847 "num_base_bdevs_operational": 4, 00:10:24.847 "base_bdevs_list": [ 00:10:24.847 { 00:10:24.847 "name": "BaseBdev1", 00:10:24.847 "uuid": "6d4053ed-8aea-5b75-b8a5-6dda369c9132", 00:10:24.847 "is_configured": true, 00:10:24.847 "data_offset": 2048, 00:10:24.847 "data_size": 63488 00:10:24.847 }, 00:10:24.847 { 00:10:24.847 "name": "BaseBdev2", 00:10:24.847 "uuid": "d25cd243-04e0-5ddf-b8e8-8362caa8ba26", 00:10:24.847 "is_configured": true, 00:10:24.847 "data_offset": 2048, 00:10:24.847 "data_size": 63488 00:10:24.847 }, 00:10:24.847 { 00:10:24.847 "name": "BaseBdev3", 00:10:24.847 "uuid": "bfbadb09-4a67-5906-a530-8cf0107dd3f7", 00:10:24.847 "is_configured": true, 00:10:24.847 "data_offset": 2048, 00:10:24.847 "data_size": 63488 00:10:24.847 }, 00:10:24.847 { 00:10:24.847 "name": "BaseBdev4", 00:10:24.847 "uuid": "c3c7ea88-07e6-56cc-bd2a-0b33bfce7457", 00:10:24.847 "is_configured": true, 00:10:24.847 "data_offset": 2048, 00:10:24.847 "data_size": 63488 00:10:24.847 } 00:10:24.847 ] 00:10:24.847 }' 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.847 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.417 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.417 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.417 18:41:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.417 [2024-12-15 18:41:25.568296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.417 [2024-12-15 18:41:25.568448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.417 [2024-12-15 18:41:25.571068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.417 [2024-12-15 18:41:25.571123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.417 [2024-12-15 18:41:25.571242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.417 [2024-12-15 18:41:25.571252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:25.417 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.417 { 00:10:25.417 "results": [ 00:10:25.417 { 00:10:25.417 "job": "raid_bdev1", 00:10:25.417 "core_mask": "0x1", 00:10:25.418 "workload": "randrw", 00:10:25.418 "percentage": 50, 00:10:25.418 "status": "finished", 00:10:25.418 "queue_depth": 1, 00:10:25.418 "io_size": 131072, 00:10:25.418 "runtime": 1.375431, 00:10:25.418 "iops": 11418.239082876567, 00:10:25.418 "mibps": 1427.279885359571, 00:10:25.418 "io_failed": 0, 00:10:25.418 "io_timeout": 0, 00:10:25.418 "avg_latency_us": 84.94403868264357, 00:10:25.418 "min_latency_us": 23.58777292576419, 00:10:25.418 "max_latency_us": 1423.7624454148472 00:10:25.418 } 00:10:25.418 ], 00:10:25.418 "core_count": 1 00:10:25.418 } 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87676 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 87676 ']' 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 87676 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87676 00:10:25.418 killing process with pid 87676 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87676' 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 87676 00:10:25.418 [2024-12-15 18:41:25.611894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.418 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 87676 00:10:25.418 [2024-12-15 18:41:25.647036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jJBuHeGIuf 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.678 00:10:25.678 real 0m3.396s 00:10:25.678 user 0m4.325s 00:10:25.678 sys 0m0.571s 
00:10:25.678 ************************************ 00:10:25.678 END TEST raid_read_error_test 00:10:25.678 ************************************ 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.678 18:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.678 18:41:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:25.678 18:41:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.678 18:41:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.678 18:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.678 ************************************ 00:10:25.678 START TEST raid_write_error_test 00:10:25.678 ************************************ 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:25.678 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.sSae3HjEBS 00:10:25.679 18:41:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87811 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87811 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 87811 ']' 00:10:25.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.679 18:41:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.679 [2024-12-15 18:41:26.063340] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:25.679 [2024-12-15 18:41:26.063492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87811 ] 00:10:25.939 [2024-12-15 18:41:26.241179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.939 [2024-12-15 18:41:26.266583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.939 [2024-12-15 18:41:26.309148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.939 [2024-12-15 18:41:26.309206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.510 BaseBdev1_malloc 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.510 true 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.510 [2024-12-15 18:41:26.928948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:26.510 [2024-12-15 18:41:26.929014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.510 [2024-12-15 18:41:26.929043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:26.510 [2024-12-15 18:41:26.929052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.510 [2024-12-15 18:41:26.931161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.510 [2024-12-15 18:41:26.931215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.510 BaseBdev1 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.510 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 BaseBdev2_malloc 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:26.776 18:41:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 true 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 [2024-12-15 18:41:26.969684] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:26.776 [2024-12-15 18:41:26.969740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.776 [2024-12-15 18:41:26.969761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.776 [2024-12-15 18:41:26.969769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.776 [2024-12-15 18:41:26.971822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.776 [2024-12-15 18:41:26.971855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.776 BaseBdev2 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:26.776 BaseBdev3_malloc 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 true 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 [2024-12-15 18:41:27.010362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:26.776 [2024-12-15 18:41:27.010415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.776 [2024-12-15 18:41:27.010436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:26.776 [2024-12-15 18:41:27.010445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.776 [2024-12-15 18:41:27.012480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.776 [2024-12-15 18:41:27.012516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:26.776 BaseBdev3 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 BaseBdev4_malloc 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 true 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 [2024-12-15 18:41:27.061602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:26.776 [2024-12-15 18:41:27.061655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.776 [2024-12-15 18:41:27.061676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:26.776 [2024-12-15 18:41:27.061685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.776 [2024-12-15 18:41:27.063694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.776 [2024-12-15 18:41:27.063731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:26.776 BaseBdev4 
00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.776 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.776 [2024-12-15 18:41:27.073633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.776 [2024-12-15 18:41:27.075440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.776 [2024-12-15 18:41:27.075529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.776 [2024-12-15 18:41:27.075581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.776 [2024-12-15 18:41:27.075771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:26.777 [2024-12-15 18:41:27.075788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.777 [2024-12-15 18:41:27.076064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.777 [2024-12-15 18:41:27.076203] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:26.777 [2024-12-15 18:41:27.076223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:26.777 [2024-12-15 18:41:27.076337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.777 "name": "raid_bdev1", 00:10:26.777 "uuid": "0dbe6842-8f08-4c27-b2f9-2fe9ac2d9a46", 00:10:26.777 "strip_size_kb": 0, 00:10:26.777 "state": "online", 00:10:26.777 "raid_level": "raid1", 00:10:26.777 "superblock": true, 00:10:26.777 "num_base_bdevs": 4, 00:10:26.777 "num_base_bdevs_discovered": 4, 00:10:26.777 
"num_base_bdevs_operational": 4, 00:10:26.777 "base_bdevs_list": [ 00:10:26.777 { 00:10:26.777 "name": "BaseBdev1", 00:10:26.777 "uuid": "b28d7da9-09f7-5da0-b046-675b87de1ee9", 00:10:26.777 "is_configured": true, 00:10:26.777 "data_offset": 2048, 00:10:26.777 "data_size": 63488 00:10:26.777 }, 00:10:26.777 { 00:10:26.777 "name": "BaseBdev2", 00:10:26.777 "uuid": "5911980d-73ef-5fbc-adc7-d3c05324ab0e", 00:10:26.777 "is_configured": true, 00:10:26.777 "data_offset": 2048, 00:10:26.777 "data_size": 63488 00:10:26.777 }, 00:10:26.777 { 00:10:26.777 "name": "BaseBdev3", 00:10:26.777 "uuid": "c952b121-3aba-567f-a428-3fc0b9052c5e", 00:10:26.777 "is_configured": true, 00:10:26.777 "data_offset": 2048, 00:10:26.777 "data_size": 63488 00:10:26.777 }, 00:10:26.777 { 00:10:26.777 "name": "BaseBdev4", 00:10:26.777 "uuid": "0a57c600-9116-5a90-bf44-9554bc2c6ec5", 00:10:26.777 "is_configured": true, 00:10:26.777 "data_offset": 2048, 00:10:26.777 "data_size": 63488 00:10:26.777 } 00:10:26.777 ] 00:10:26.777 }' 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.777 18:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.364 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.364 18:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:27.364 [2024-12-15 18:41:27.601179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 [2024-12-15 18:41:28.515713] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:28.305 [2024-12-15 18:41:28.515892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.305 [2024-12-15 18:41:28.516146] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000068a0 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.305 "name": "raid_bdev1", 00:10:28.305 "uuid": "0dbe6842-8f08-4c27-b2f9-2fe9ac2d9a46", 00:10:28.305 "strip_size_kb": 0, 00:10:28.305 "state": "online", 00:10:28.305 "raid_level": "raid1", 00:10:28.305 "superblock": true, 00:10:28.305 "num_base_bdevs": 4, 00:10:28.305 "num_base_bdevs_discovered": 3, 00:10:28.305 "num_base_bdevs_operational": 3, 00:10:28.305 "base_bdevs_list": [ 00:10:28.305 { 00:10:28.305 "name": null, 00:10:28.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.305 "is_configured": false, 00:10:28.305 "data_offset": 0, 00:10:28.305 "data_size": 63488 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "name": "BaseBdev2", 00:10:28.305 "uuid": "5911980d-73ef-5fbc-adc7-d3c05324ab0e", 00:10:28.305 "is_configured": true, 00:10:28.305 "data_offset": 2048, 00:10:28.305 "data_size": 63488 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "name": "BaseBdev3", 00:10:28.305 "uuid": "c952b121-3aba-567f-a428-3fc0b9052c5e", 00:10:28.305 "is_configured": true, 00:10:28.305 "data_offset": 2048, 00:10:28.305 "data_size": 63488 00:10:28.305 }, 00:10:28.305 { 00:10:28.305 "name": "BaseBdev4", 00:10:28.305 "uuid": "0a57c600-9116-5a90-bf44-9554bc2c6ec5", 00:10:28.305 "is_configured": true, 00:10:28.305 "data_offset": 2048, 00:10:28.305 "data_size": 63488 00:10:28.305 } 00:10:28.305 ] 
00:10:28.305 }' 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.305 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.566 [2024-12-15 18:41:28.886385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.566 [2024-12-15 18:41:28.886516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.566 [2024-12-15 18:41:28.889194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.566 [2024-12-15 18:41:28.889308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.566 [2024-12-15 18:41:28.889413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.566 [2024-12-15 18:41:28.889425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:28.566 { 00:10:28.566 "results": [ 00:10:28.566 { 00:10:28.566 "job": "raid_bdev1", 00:10:28.566 "core_mask": "0x1", 00:10:28.566 "workload": "randrw", 00:10:28.566 "percentage": 50, 00:10:28.566 "status": "finished", 00:10:28.566 "queue_depth": 1, 00:10:28.566 "io_size": 131072, 00:10:28.566 "runtime": 1.285982, 00:10:28.566 "iops": 11986.170879530195, 00:10:28.566 "mibps": 1498.2713599412743, 00:10:28.566 "io_failed": 0, 00:10:28.566 "io_timeout": 0, 00:10:28.566 "avg_latency_us": 80.69427498281775, 00:10:28.566 "min_latency_us": 22.69344978165939, 00:10:28.566 "max_latency_us": 1531.0812227074236 00:10:28.566 } 00:10:28.566 ], 00:10:28.566 "core_count": 1 
00:10:28.566 } 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87811 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 87811 ']' 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 87811 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.566 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87811 00:10:28.567 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.567 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.567 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87811' 00:10:28.567 killing process with pid 87811 00:10:28.567 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 87811 00:10:28.567 [2024-12-15 18:41:28.935733] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.567 18:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 87811 00:10:28.567 [2024-12-15 18:41:28.971656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.sSae3HjEBS 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.827 ************************************ 00:10:28.827 END TEST 
raid_write_error_test 00:10:28.827 ************************************ 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:28.827 00:10:28.827 real 0m3.246s 00:10:28.827 user 0m4.022s 00:10:28.827 sys 0m0.557s 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.827 18:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.827 18:41:29 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:28.827 18:41:29 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:28.827 18:41:29 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:28.827 18:41:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:28.827 18:41:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.827 18:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.087 ************************************ 00:10:29.087 START TEST raid_rebuild_test 00:10:29.087 ************************************ 00:10:29.087 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87938 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87938 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87938 ']' 00:10:29.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.088 18:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.088 [2024-12-15 18:41:29.367493] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:29.088 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:29.088 Zero copy mechanism will not be used. 
00:10:29.088 [2024-12-15 18:41:29.367724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87938 ] 00:10:29.348 [2024-12-15 18:41:29.538107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.348 [2024-12-15 18:41:29.564022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.348 [2024-12-15 18:41:29.606546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.348 [2024-12-15 18:41:29.606597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 BaseBdev1_malloc 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.919 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 [2024-12-15 18:41:30.230209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:29.919 
[2024-12-15 18:41:30.230270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.920 [2024-12-15 18:41:30.230295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:29.920 [2024-12-15 18:41:30.230307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.920 [2024-12-15 18:41:30.232462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.920 [2024-12-15 18:41:30.232496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.920 BaseBdev1 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 BaseBdev2_malloc 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 [2024-12-15 18:41:30.258860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:29.920 [2024-12-15 18:41:30.258912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.920 [2024-12-15 18:41:30.258931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:10:29.920 [2024-12-15 18:41:30.258940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.920 [2024-12-15 18:41:30.261009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.920 [2024-12-15 18:41:30.261042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.920 BaseBdev2 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 spare_malloc 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 spare_delay 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 [2024-12-15 18:41:30.299354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:29.920 [2024-12-15 18:41:30.299422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:10:29.920 [2024-12-15 18:41:30.299442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:29.920 [2024-12-15 18:41:30.299451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.920 [2024-12-15 18:41:30.301462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.920 [2024-12-15 18:41:30.301494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:29.920 spare 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 [2024-12-15 18:41:30.311369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.920 [2024-12-15 18:41:30.313135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.920 [2024-12-15 18:41:30.313223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:29.920 [2024-12-15 18:41:30.313233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:29.920 [2024-12-15 18:41:30.313472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:29.920 [2024-12-15 18:41:30.313604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:29.920 [2024-12-15 18:41:30.313622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:29.920 [2024-12-15 18:41:30.313746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.920 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.180 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.180 "name": "raid_bdev1", 00:10:30.180 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:30.180 "strip_size_kb": 0, 00:10:30.180 "state": "online", 00:10:30.180 
"raid_level": "raid1", 00:10:30.180 "superblock": false, 00:10:30.180 "num_base_bdevs": 2, 00:10:30.180 "num_base_bdevs_discovered": 2, 00:10:30.180 "num_base_bdevs_operational": 2, 00:10:30.180 "base_bdevs_list": [ 00:10:30.180 { 00:10:30.180 "name": "BaseBdev1", 00:10:30.180 "uuid": "55777560-bb3a-5ec1-aa4b-d0fb647ffddb", 00:10:30.180 "is_configured": true, 00:10:30.180 "data_offset": 0, 00:10:30.180 "data_size": 65536 00:10:30.180 }, 00:10:30.180 { 00:10:30.180 "name": "BaseBdev2", 00:10:30.180 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:30.180 "is_configured": true, 00:10:30.180 "data_offset": 0, 00:10:30.180 "data_size": 65536 00:10:30.180 } 00:10:30.181 ] 00:10:30.181 }' 00:10:30.181 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.181 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:30.441 [2024-12-15 18:41:30.663054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:30.441 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:30.701 [2024-12-15 18:41:30.946332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:30.701 /dev/nbd0 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:30.701 18:41:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.701 1+0 records in 00:10:30.701 1+0 records out 00:10:30.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357818 s, 11.4 MB/s 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:30.701 18:41:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:34.899 65536+0 records in 00:10:34.899 65536+0 records out 00:10:34.899 33554432 bytes (34 MB, 32 MiB) copied, 4.12737 s, 8.1 MB/s 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.899 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:35.159 [2024-12-15 18:41:35.361999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 [2024-12-15 18:41:35.374061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.159 18:41:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.159 "name": "raid_bdev1", 00:10:35.159 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:35.159 "strip_size_kb": 0, 00:10:35.159 "state": "online", 00:10:35.159 "raid_level": "raid1", 00:10:35.159 "superblock": false, 00:10:35.159 "num_base_bdevs": 2, 00:10:35.159 "num_base_bdevs_discovered": 1, 00:10:35.159 "num_base_bdevs_operational": 1, 00:10:35.159 "base_bdevs_list": [ 00:10:35.159 { 00:10:35.159 "name": null, 00:10:35.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.159 "is_configured": false, 00:10:35.159 "data_offset": 0, 00:10:35.159 "data_size": 65536 00:10:35.159 }, 00:10:35.159 { 00:10:35.159 "name": "BaseBdev2", 00:10:35.159 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:35.159 "is_configured": true, 00:10:35.159 "data_offset": 0, 00:10:35.159 "data_size": 65536 00:10:35.159 } 00:10:35.159 ] 00:10:35.159 }' 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.159 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.419 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:35.419 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.419 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.419 [2024-12-15 18:41:35.833326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:35.419 [2024-12-15 18:41:35.838383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 
00:10:35.419 18:41:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.419 18:41:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:35.419 [2024-12-15 18:41:35.840262] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:36.800 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:36.800 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:36.800 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:36.800 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:36.801 "name": "raid_bdev1", 00:10:36.801 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:36.801 "strip_size_kb": 0, 00:10:36.801 "state": "online", 00:10:36.801 "raid_level": "raid1", 00:10:36.801 "superblock": false, 00:10:36.801 "num_base_bdevs": 2, 00:10:36.801 "num_base_bdevs_discovered": 2, 00:10:36.801 "num_base_bdevs_operational": 2, 00:10:36.801 "process": { 00:10:36.801 "type": "rebuild", 00:10:36.801 "target": "spare", 00:10:36.801 "progress": { 00:10:36.801 
"blocks": 20480, 00:10:36.801 "percent": 31 00:10:36.801 } 00:10:36.801 }, 00:10:36.801 "base_bdevs_list": [ 00:10:36.801 { 00:10:36.801 "name": "spare", 00:10:36.801 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:36.801 "is_configured": true, 00:10:36.801 "data_offset": 0, 00:10:36.801 "data_size": 65536 00:10:36.801 }, 00:10:36.801 { 00:10:36.801 "name": "BaseBdev2", 00:10:36.801 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:36.801 "is_configured": true, 00:10:36.801 "data_offset": 0, 00:10:36.801 "data_size": 65536 00:10:36.801 } 00:10:36.801 ] 00:10:36.801 }' 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.801 18:41:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.801 [2024-12-15 18:41:36.988944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:36.801 [2024-12-15 18:41:37.045339] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:36.801 [2024-12-15 18:41:37.045394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.801 [2024-12-15 18:41:37.045413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:36.801 [2024-12-15 18:41:37.045420] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:36.801 18:41:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.801 "name": "raid_bdev1", 00:10:36.801 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:36.801 "strip_size_kb": 0, 00:10:36.801 "state": "online", 00:10:36.801 "raid_level": "raid1", 00:10:36.801 
"superblock": false, 00:10:36.801 "num_base_bdevs": 2, 00:10:36.801 "num_base_bdevs_discovered": 1, 00:10:36.801 "num_base_bdevs_operational": 1, 00:10:36.801 "base_bdevs_list": [ 00:10:36.801 { 00:10:36.801 "name": null, 00:10:36.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.801 "is_configured": false, 00:10:36.801 "data_offset": 0, 00:10:36.801 "data_size": 65536 00:10:36.801 }, 00:10:36.801 { 00:10:36.801 "name": "BaseBdev2", 00:10:36.801 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:36.801 "is_configured": true, 00:10:36.801 "data_offset": 0, 00:10:36.801 "data_size": 65536 00:10:36.801 } 00:10:36.801 ] 00:10:36.801 }' 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.801 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.062 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.321 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:10:37.321 "name": "raid_bdev1", 00:10:37.321 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:37.321 "strip_size_kb": 0, 00:10:37.321 "state": "online", 00:10:37.321 "raid_level": "raid1", 00:10:37.321 "superblock": false, 00:10:37.321 "num_base_bdevs": 2, 00:10:37.322 "num_base_bdevs_discovered": 1, 00:10:37.322 "num_base_bdevs_operational": 1, 00:10:37.322 "base_bdevs_list": [ 00:10:37.322 { 00:10:37.322 "name": null, 00:10:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.322 "is_configured": false, 00:10:37.322 "data_offset": 0, 00:10:37.322 "data_size": 65536 00:10:37.322 }, 00:10:37.322 { 00:10:37.322 "name": "BaseBdev2", 00:10:37.322 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:37.322 "is_configured": true, 00:10:37.322 "data_offset": 0, 00:10:37.322 "data_size": 65536 00:10:37.322 } 00:10:37.322 ] 00:10:37.322 }' 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 [2024-12-15 18:41:37.593552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:37.322 [2024-12-15 18:41:37.598535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:10:37.322 18:41:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.322 
18:41:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:37.322 [2024-12-15 18:41:37.600382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.259 "name": "raid_bdev1", 00:10:38.259 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:38.259 "strip_size_kb": 0, 00:10:38.259 "state": "online", 00:10:38.259 "raid_level": "raid1", 00:10:38.259 "superblock": false, 00:10:38.259 "num_base_bdevs": 2, 00:10:38.259 "num_base_bdevs_discovered": 2, 00:10:38.259 "num_base_bdevs_operational": 2, 00:10:38.259 "process": { 00:10:38.259 "type": "rebuild", 00:10:38.259 "target": "spare", 00:10:38.259 "progress": { 00:10:38.259 "blocks": 20480, 00:10:38.259 "percent": 31 00:10:38.259 } 00:10:38.259 }, 00:10:38.259 "base_bdevs_list": [ 
00:10:38.259 { 00:10:38.259 "name": "spare", 00:10:38.259 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:38.259 "is_configured": true, 00:10:38.259 "data_offset": 0, 00:10:38.259 "data_size": 65536 00:10:38.259 }, 00:10:38.259 { 00:10:38.259 "name": "BaseBdev2", 00:10:38.259 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:38.259 "is_configured": true, 00:10:38.259 "data_offset": 0, 00:10:38.259 "data_size": 65536 00:10:38.259 } 00:10:38.259 ] 00:10:38.259 }' 00:10:38.259 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=295 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:38.518 
18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:38.518 "name": "raid_bdev1", 00:10:38.518 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:38.518 "strip_size_kb": 0, 00:10:38.518 "state": "online", 00:10:38.518 "raid_level": "raid1", 00:10:38.518 "superblock": false, 00:10:38.518 "num_base_bdevs": 2, 00:10:38.518 "num_base_bdevs_discovered": 2, 00:10:38.518 "num_base_bdevs_operational": 2, 00:10:38.518 "process": { 00:10:38.518 "type": "rebuild", 00:10:38.518 "target": "spare", 00:10:38.518 "progress": { 00:10:38.518 "blocks": 22528, 00:10:38.518 "percent": 34 00:10:38.518 } 00:10:38.518 }, 00:10:38.518 "base_bdevs_list": [ 00:10:38.518 { 00:10:38.518 "name": "spare", 00:10:38.518 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:38.518 "is_configured": true, 00:10:38.518 "data_offset": 0, 00:10:38.518 "data_size": 65536 00:10:38.518 }, 00:10:38.518 { 00:10:38.518 "name": "BaseBdev2", 00:10:38.518 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:38.518 "is_configured": true, 00:10:38.518 "data_offset": 0, 00:10:38.518 "data_size": 65536 00:10:38.518 } 00:10:38.518 ] 00:10:38.518 }' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:38.518 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:10:38.519 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:38.519 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:38.519 18:41:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:39.457 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:39.457 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:39.457 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:39.457 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.716 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:39.716 "name": "raid_bdev1", 00:10:39.716 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:39.717 "strip_size_kb": 0, 00:10:39.717 "state": "online", 00:10:39.717 "raid_level": "raid1", 00:10:39.717 "superblock": false, 00:10:39.717 "num_base_bdevs": 2, 00:10:39.717 "num_base_bdevs_discovered": 2, 00:10:39.717 "num_base_bdevs_operational": 2, 00:10:39.717 "process": { 
00:10:39.717 "type": "rebuild", 00:10:39.717 "target": "spare", 00:10:39.717 "progress": { 00:10:39.717 "blocks": 47104, 00:10:39.717 "percent": 71 00:10:39.717 } 00:10:39.717 }, 00:10:39.717 "base_bdevs_list": [ 00:10:39.717 { 00:10:39.717 "name": "spare", 00:10:39.717 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:39.717 "is_configured": true, 00:10:39.717 "data_offset": 0, 00:10:39.717 "data_size": 65536 00:10:39.717 }, 00:10:39.717 { 00:10:39.717 "name": "BaseBdev2", 00:10:39.717 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:39.717 "is_configured": true, 00:10:39.717 "data_offset": 0, 00:10:39.717 "data_size": 65536 00:10:39.717 } 00:10:39.717 ] 00:10:39.717 }' 00:10:39.717 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:39.717 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:39.717 18:41:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:39.717 18:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:39.717 18:41:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:40.657 [2024-12-15 18:41:40.813202] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:40.657 [2024-12-15 18:41:40.813316] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:40.657 [2024-12-15 18:41:40.813357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.657 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.917 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.917 "name": "raid_bdev1", 00:10:40.917 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:40.917 "strip_size_kb": 0, 00:10:40.917 "state": "online", 00:10:40.917 "raid_level": "raid1", 00:10:40.917 "superblock": false, 00:10:40.917 "num_base_bdevs": 2, 00:10:40.917 "num_base_bdevs_discovered": 2, 00:10:40.917 "num_base_bdevs_operational": 2, 00:10:40.917 "base_bdevs_list": [ 00:10:40.917 { 00:10:40.917 "name": "spare", 00:10:40.917 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:40.918 "is_configured": true, 00:10:40.918 "data_offset": 0, 00:10:40.918 "data_size": 65536 00:10:40.918 }, 00:10:40.918 { 00:10:40.918 "name": "BaseBdev2", 00:10:40.918 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:40.918 "is_configured": true, 00:10:40.918 "data_offset": 0, 00:10:40.918 "data_size": 65536 00:10:40.918 } 00:10:40.918 ] 00:10:40.918 }' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:40.918 18:41:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:40.918 "name": "raid_bdev1", 00:10:40.918 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:40.918 "strip_size_kb": 0, 00:10:40.918 "state": "online", 00:10:40.918 "raid_level": "raid1", 00:10:40.918 "superblock": false, 00:10:40.918 "num_base_bdevs": 2, 00:10:40.918 "num_base_bdevs_discovered": 2, 00:10:40.918 "num_base_bdevs_operational": 2, 00:10:40.918 "base_bdevs_list": [ 00:10:40.918 { 00:10:40.918 "name": "spare", 00:10:40.918 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:40.918 "is_configured": true, 
00:10:40.918 "data_offset": 0, 00:10:40.918 "data_size": 65536 00:10:40.918 }, 00:10:40.918 { 00:10:40.918 "name": "BaseBdev2", 00:10:40.918 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:40.918 "is_configured": true, 00:10:40.918 "data_offset": 0, 00:10:40.918 "data_size": 65536 00:10:40.918 } 00:10:40.918 ] 00:10:40.918 }' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.918 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.179 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.179 "name": "raid_bdev1", 00:10:41.179 "uuid": "f0387408-8914-4c3c-9565-7520657438a5", 00:10:41.179 "strip_size_kb": 0, 00:10:41.179 "state": "online", 00:10:41.179 "raid_level": "raid1", 00:10:41.179 "superblock": false, 00:10:41.179 "num_base_bdevs": 2, 00:10:41.179 "num_base_bdevs_discovered": 2, 00:10:41.179 "num_base_bdevs_operational": 2, 00:10:41.179 "base_bdevs_list": [ 00:10:41.179 { 00:10:41.179 "name": "spare", 00:10:41.179 "uuid": "c3219867-ec68-5eb0-ad43-8087e6c146ea", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 0, 00:10:41.179 "data_size": 65536 00:10:41.179 }, 00:10:41.179 { 00:10:41.179 "name": "BaseBdev2", 00:10:41.179 "uuid": "2ae932bc-f67d-59da-a8ef-c167c34942c0", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 0, 00:10:41.179 "data_size": 65536 00:10:41.179 } 00:10:41.179 ] 00:10:41.179 }' 00:10:41.179 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.179 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.439 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.439 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.439 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.439 [2024-12-15 18:41:41.724671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.439 [2024-12-15 18:41:41.724706] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.439 [2024-12-15 18:41:41.724796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.439 [2024-12-15 18:41:41.724874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.439 [2024-12-15 18:41:41.724887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:41.439 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.439 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.440 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:41.709 /dev/nbd0 00:10:41.709 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:41.709 18:41:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:41.709 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:41.709 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:41.709 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:41.710 18:41:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.710 1+0 records in 00:10:41.710 1+0 records out 00:10:41.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004076 s, 10.0 MB/s 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.710 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:41.985 /dev/nbd1 00:10:41.985 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:41.985 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:41.985 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:41.985 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:41.985 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.986 1+0 records in 00:10:41.986 1+0 records out 00:10:41.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461684 s, 8.9 MB/s 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.986 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:42.246 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87938 00:10:42.508 18:41:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87938 ']' 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87938 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87938 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.508 killing process with pid 87938 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87938' 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87938 00:10:42.508 Received shutdown signal, test time was about 60.000000 seconds 00:10:42.508 00:10:42.508 Latency(us) 00:10:42.508 [2024-12-15T18:41:42.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.508 [2024-12-15T18:41:42.949Z] =================================================================================================================== 00:10:42.508 [2024-12-15T18:41:42.949Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:42.508 [2024-12-15 18:41:42.803352] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.508 18:41:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87938 00:10:42.508 [2024-12-15 18:41:42.834868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:42.769 00:10:42.769 real 0m13.773s 00:10:42.769 user 0m15.539s 00:10:42.769 sys 0m3.096s 00:10:42.769 18:41:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.769 ************************************ 00:10:42.769 END TEST raid_rebuild_test 00:10:42.769 ************************************ 00:10:42.769 18:41:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:42.769 18:41:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:42.769 18:41:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.769 18:41:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.769 ************************************ 00:10:42.769 START TEST raid_rebuild_test_sb 00:10:42.769 ************************************ 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88339 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88339 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88339 ']' 00:10:42.769 18:41:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.769 18:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.030 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:43.030 Zero copy mechanism will not be used. 00:10:43.030 [2024-12-15 18:41:43.216309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:43.030 [2024-12-15 18:41:43.216454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88339 ] 00:10:43.030 [2024-12-15 18:41:43.387444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.030 [2024-12-15 18:41:43.414393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.030 [2024-12-15 18:41:43.457471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.030 [2024-12-15 18:41:43.457516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.970 BaseBdev1_malloc 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:43.970 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 [2024-12-15 18:41:44.105078] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:43.971 [2024-12-15 18:41:44.105146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.971 [2024-12-15 18:41:44.105176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.971 [2024-12-15 18:41:44.105190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.971 [2024-12-15 18:41:44.107311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.971 [2024-12-15 18:41:44.107344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.971 BaseBdev1 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.971 18:41:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 BaseBdev2_malloc 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 [2024-12-15 18:41:44.133581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:43.971 [2024-12-15 18:41:44.133630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.971 [2024-12-15 18:41:44.133650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:43.971 [2024-12-15 18:41:44.133658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.971 [2024-12-15 18:41:44.135676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.971 [2024-12-15 18:41:44.135709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.971 BaseBdev2 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 spare_malloc 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 spare_delay 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 [2024-12-15 18:41:44.170069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:43.971 [2024-12-15 18:41:44.170122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.971 [2024-12-15 18:41:44.170142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:43.971 [2024-12-15 18:41:44.170151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.971 [2024-12-15 18:41:44.172186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.971 [2024-12-15 18:41:44.172218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:43.971 spare 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 [2024-12-15 18:41:44.178085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.971 [2024-12-15 18:41:44.179814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.971 [2024-12-15 18:41:44.179961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:43.971 [2024-12-15 18:41:44.179980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.971 [2024-12-15 18:41:44.180214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:10:43.971 [2024-12-15 18:41:44.180351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:43.971 [2024-12-15 18:41:44.180375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:43.971 [2024-12-15 18:41:44.180506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.971 "name": "raid_bdev1", 00:10:43.971 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:43.971 "strip_size_kb": 0, 00:10:43.971 "state": "online", 00:10:43.971 "raid_level": "raid1", 00:10:43.971 "superblock": true, 00:10:43.971 "num_base_bdevs": 2, 00:10:43.971 "num_base_bdevs_discovered": 2, 00:10:43.971 "num_base_bdevs_operational": 2, 00:10:43.971 "base_bdevs_list": [ 00:10:43.971 { 00:10:43.971 "name": "BaseBdev1", 00:10:43.971 "uuid": "8de5f0d7-9ab8-505e-856d-102ccb125379", 00:10:43.971 "is_configured": true, 00:10:43.971 "data_offset": 2048, 00:10:43.971 "data_size": 63488 00:10:43.971 }, 00:10:43.971 { 00:10:43.971 "name": "BaseBdev2", 00:10:43.971 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:43.971 "is_configured": true, 00:10:43.971 "data_offset": 2048, 00:10:43.971 "data_size": 63488 00:10:43.971 } 00:10:43.971 ] 00:10:43.971 }' 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.971 18:41:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.231 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.231 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:44.231 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.231 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.231 [2024-12-15 18:41:44.645584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.231 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:44.491 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:44.491 [2024-12-15 18:41:44.924966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:44.749 /dev/nbd0 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:44.749 1+0 records in 00:10:44.749 1+0 records out 00:10:44.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376096 s, 10.9 MB/s 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:44.749 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:44.750 18:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:48.933 63488+0 records in 00:10:48.933 63488+0 records out 00:10:48.933 32505856 bytes (33 MB, 31 MiB) copied, 3.62185 s, 9.0 MB/s 00:10:48.933 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:48.933 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:48.934 18:41:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.934 [2024-12-15 18:41:48.871896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.934 [2024-12-15 18:41:48.887954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.934 "name": "raid_bdev1", 00:10:48.934 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:48.934 "strip_size_kb": 0, 00:10:48.934 "state": "online", 00:10:48.934 "raid_level": "raid1", 00:10:48.934 "superblock": true, 
00:10:48.934 "num_base_bdevs": 2, 00:10:48.934 "num_base_bdevs_discovered": 1, 00:10:48.934 "num_base_bdevs_operational": 1, 00:10:48.934 "base_bdevs_list": [ 00:10:48.934 { 00:10:48.934 "name": null, 00:10:48.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.934 "is_configured": false, 00:10:48.934 "data_offset": 0, 00:10:48.934 "data_size": 63488 00:10:48.934 }, 00:10:48.934 { 00:10:48.934 "name": "BaseBdev2", 00:10:48.934 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:48.934 "is_configured": true, 00:10:48.934 "data_offset": 2048, 00:10:48.934 "data_size": 63488 00:10:48.934 } 00:10:48.934 ] 00:10:48.934 }' 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.934 18:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.934 18:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:48.934 18:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.934 18:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.934 [2024-12-15 18:41:49.347209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:48.934 [2024-12-15 18:41:49.352158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:10:48.934 18:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.934 [2024-12-15 18:41:49.354039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:48.934 18:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.312 "name": "raid_bdev1", 00:10:50.312 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:50.312 "strip_size_kb": 0, 00:10:50.312 "state": "online", 00:10:50.312 "raid_level": "raid1", 00:10:50.312 "superblock": true, 00:10:50.312 "num_base_bdevs": 2, 00:10:50.312 "num_base_bdevs_discovered": 2, 00:10:50.312 "num_base_bdevs_operational": 2, 00:10:50.312 "process": { 00:10:50.312 "type": "rebuild", 00:10:50.312 "target": "spare", 00:10:50.312 "progress": { 00:10:50.312 "blocks": 20480, 00:10:50.312 "percent": 32 00:10:50.312 } 00:10:50.312 }, 00:10:50.312 "base_bdevs_list": [ 00:10:50.312 { 00:10:50.312 "name": "spare", 00:10:50.312 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:50.312 "is_configured": true, 00:10:50.312 "data_offset": 2048, 00:10:50.312 "data_size": 63488 00:10:50.312 }, 00:10:50.312 { 00:10:50.312 "name": "BaseBdev2", 00:10:50.312 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:50.312 "is_configured": true, 00:10:50.312 "data_offset": 2048, 00:10:50.312 "data_size": 63488 
00:10:50.312 } 00:10:50.312 ] 00:10:50.312 }' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.312 [2024-12-15 18:41:50.498485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:50.312 [2024-12-15 18:41:50.559455] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:50.312 [2024-12-15 18:41:50.559519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.312 [2024-12-15 18:41:50.559539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:50.312 [2024-12-15 18:41:50.559553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.312 "name": "raid_bdev1", 00:10:50.312 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:50.312 "strip_size_kb": 0, 00:10:50.312 "state": "online", 00:10:50.312 "raid_level": "raid1", 00:10:50.312 "superblock": true, 00:10:50.312 "num_base_bdevs": 2, 00:10:50.312 "num_base_bdevs_discovered": 1, 00:10:50.312 "num_base_bdevs_operational": 1, 00:10:50.312 "base_bdevs_list": [ 00:10:50.312 { 00:10:50.312 "name": null, 00:10:50.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.312 "is_configured": false, 00:10:50.312 "data_offset": 0, 00:10:50.312 "data_size": 63488 00:10:50.312 }, 00:10:50.312 { 00:10:50.312 "name": "BaseBdev2", 00:10:50.312 "uuid": 
"55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:50.312 "is_configured": true, 00:10:50.312 "data_offset": 2048, 00:10:50.312 "data_size": 63488 00:10:50.312 } 00:10:50.312 ] 00:10:50.312 }' 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.312 18:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.882 "name": "raid_bdev1", 00:10:50.882 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:50.882 "strip_size_kb": 0, 00:10:50.882 "state": "online", 00:10:50.882 "raid_level": "raid1", 00:10:50.882 "superblock": true, 00:10:50.882 "num_base_bdevs": 2, 00:10:50.882 "num_base_bdevs_discovered": 1, 00:10:50.882 "num_base_bdevs_operational": 1, 00:10:50.882 "base_bdevs_list": [ 00:10:50.882 { 
00:10:50.882 "name": null, 00:10:50.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.882 "is_configured": false, 00:10:50.882 "data_offset": 0, 00:10:50.882 "data_size": 63488 00:10:50.882 }, 00:10:50.882 { 00:10:50.882 "name": "BaseBdev2", 00:10:50.882 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:50.882 "is_configured": true, 00:10:50.882 "data_offset": 2048, 00:10:50.882 "data_size": 63488 00:10:50.882 } 00:10:50.882 ] 00:10:50.882 }' 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.882 [2024-12-15 18:41:51.180273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:50.882 [2024-12-15 18:41:51.185226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.882 18:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:50.882 [2024-12-15 18:41:51.187062] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:51.829 18:41:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.829 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:51.829 "name": "raid_bdev1", 00:10:51.829 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:51.829 "strip_size_kb": 0, 00:10:51.829 "state": "online", 00:10:51.829 "raid_level": "raid1", 00:10:51.829 "superblock": true, 00:10:51.829 "num_base_bdevs": 2, 00:10:51.829 "num_base_bdevs_discovered": 2, 00:10:51.829 "num_base_bdevs_operational": 2, 00:10:51.829 "process": { 00:10:51.829 "type": "rebuild", 00:10:51.829 "target": "spare", 00:10:51.829 "progress": { 00:10:51.829 "blocks": 20480, 00:10:51.830 "percent": 32 00:10:51.830 } 00:10:51.830 }, 00:10:51.830 "base_bdevs_list": [ 00:10:51.830 { 00:10:51.830 "name": "spare", 00:10:51.830 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:51.830 "is_configured": true, 00:10:51.830 "data_offset": 2048, 00:10:51.830 "data_size": 63488 00:10:51.830 }, 00:10:51.830 { 00:10:51.830 "name": "BaseBdev2", 00:10:51.830 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:51.830 
"is_configured": true, 00:10:51.830 "data_offset": 2048, 00:10:51.830 "data_size": 63488 00:10:51.830 } 00:10:51.830 ] 00:10:51.830 }' 00:10:51.830 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:52.100 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=309 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.100 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:52.100 "name": "raid_bdev1", 00:10:52.100 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:52.100 "strip_size_kb": 0, 00:10:52.101 "state": "online", 00:10:52.101 "raid_level": "raid1", 00:10:52.101 "superblock": true, 00:10:52.101 "num_base_bdevs": 2, 00:10:52.101 "num_base_bdevs_discovered": 2, 00:10:52.101 "num_base_bdevs_operational": 2, 00:10:52.101 "process": { 00:10:52.101 "type": "rebuild", 00:10:52.101 "target": "spare", 00:10:52.101 "progress": { 00:10:52.101 "blocks": 22528, 00:10:52.101 "percent": 35 00:10:52.101 } 00:10:52.101 }, 00:10:52.101 "base_bdevs_list": [ 00:10:52.101 { 00:10:52.101 "name": "spare", 00:10:52.101 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:52.101 "is_configured": true, 00:10:52.101 "data_offset": 2048, 00:10:52.101 "data_size": 63488 00:10:52.101 }, 00:10:52.101 { 00:10:52.101 "name": "BaseBdev2", 00:10:52.101 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:52.101 "is_configured": true, 00:10:52.101 "data_offset": 2048, 00:10:52.101 "data_size": 63488 00:10:52.101 } 00:10:52.101 ] 00:10:52.101 }' 00:10:52.101 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.101 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:52.101 18:41:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.101 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:52.101 18:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.040 18:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.299 18:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.299 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:53.299 "name": "raid_bdev1", 00:10:53.299 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:53.299 "strip_size_kb": 0, 00:10:53.299 "state": "online", 00:10:53.299 "raid_level": "raid1", 00:10:53.299 "superblock": true, 00:10:53.299 "num_base_bdevs": 2, 00:10:53.299 "num_base_bdevs_discovered": 2, 00:10:53.299 "num_base_bdevs_operational": 2, 00:10:53.299 "process": { 
00:10:53.299 "type": "rebuild", 00:10:53.299 "target": "spare", 00:10:53.299 "progress": { 00:10:53.299 "blocks": 45056, 00:10:53.300 "percent": 70 00:10:53.300 } 00:10:53.300 }, 00:10:53.300 "base_bdevs_list": [ 00:10:53.300 { 00:10:53.300 "name": "spare", 00:10:53.300 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:53.300 "is_configured": true, 00:10:53.300 "data_offset": 2048, 00:10:53.300 "data_size": 63488 00:10:53.300 }, 00:10:53.300 { 00:10:53.300 "name": "BaseBdev2", 00:10:53.300 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:53.300 "is_configured": true, 00:10:53.300 "data_offset": 2048, 00:10:53.300 "data_size": 63488 00:10:53.300 } 00:10:53.300 ] 00:10:53.300 }' 00:10:53.300 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:53.300 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:53.300 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:53.300 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:53.300 18:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:53.869 [2024-12-15 18:41:54.298949] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:53.869 [2024-12-15 18:41:54.299055] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:53.869 [2024-12-15 18:41:54.299173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.438 
18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.438 "name": "raid_bdev1", 00:10:54.438 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:54.438 "strip_size_kb": 0, 00:10:54.438 "state": "online", 00:10:54.438 "raid_level": "raid1", 00:10:54.438 "superblock": true, 00:10:54.438 "num_base_bdevs": 2, 00:10:54.438 "num_base_bdevs_discovered": 2, 00:10:54.438 "num_base_bdevs_operational": 2, 00:10:54.438 "base_bdevs_list": [ 00:10:54.438 { 00:10:54.438 "name": "spare", 00:10:54.438 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:54.438 "is_configured": true, 00:10:54.438 "data_offset": 2048, 00:10:54.438 "data_size": 63488 00:10:54.438 }, 00:10:54.438 { 00:10:54.438 "name": "BaseBdev2", 00:10:54.438 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:54.438 "is_configured": true, 00:10:54.438 "data_offset": 2048, 00:10:54.438 "data_size": 63488 00:10:54.438 } 00:10:54.438 ] 00:10:54.438 }' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.438 "name": "raid_bdev1", 00:10:54.438 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:54.438 "strip_size_kb": 0, 00:10:54.438 "state": "online", 00:10:54.438 "raid_level": "raid1", 00:10:54.438 "superblock": true, 00:10:54.438 "num_base_bdevs": 2, 00:10:54.438 "num_base_bdevs_discovered": 2, 00:10:54.438 "num_base_bdevs_operational": 2, 00:10:54.438 "base_bdevs_list": [ 00:10:54.438 { 00:10:54.438 
"name": "spare", 00:10:54.438 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:54.438 "is_configured": true, 00:10:54.438 "data_offset": 2048, 00:10:54.438 "data_size": 63488 00:10:54.438 }, 00:10:54.438 { 00:10:54.438 "name": "BaseBdev2", 00:10:54.438 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:54.438 "is_configured": true, 00:10:54.438 "data_offset": 2048, 00:10:54.438 "data_size": 63488 00:10:54.438 } 00:10:54.438 ] 00:10:54.438 }' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.438 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:54.439 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.698 "name": "raid_bdev1", 00:10:54.698 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:54.698 "strip_size_kb": 0, 00:10:54.698 "state": "online", 00:10:54.698 "raid_level": "raid1", 00:10:54.698 "superblock": true, 00:10:54.698 "num_base_bdevs": 2, 00:10:54.698 "num_base_bdevs_discovered": 2, 00:10:54.698 "num_base_bdevs_operational": 2, 00:10:54.698 "base_bdevs_list": [ 00:10:54.698 { 00:10:54.698 "name": "spare", 00:10:54.698 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:54.698 "is_configured": true, 00:10:54.698 "data_offset": 2048, 00:10:54.698 "data_size": 63488 00:10:54.698 }, 00:10:54.698 { 00:10:54.698 "name": "BaseBdev2", 00:10:54.698 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:54.698 "is_configured": true, 00:10:54.698 "data_offset": 2048, 00:10:54.698 "data_size": 63488 00:10:54.698 } 00:10:54.698 ] 00:10:54.698 }' 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.698 18:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.957 [2024-12-15 18:41:55.358266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.957 [2024-12-15 18:41:55.358300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.957 [2024-12-15 18:41:55.358406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.957 [2024-12-15 18:41:55.358490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.957 [2024-12-15 18:41:55.358503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.957 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:55.217 /dev/nbd0 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.217 1+0 records in 00:10:55.217 1+0 records out 00:10:55.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424338 s, 9.7 MB/s 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.217 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:55.476 /dev/nbd1 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:55.735 18:41:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.735 1+0 records in 00:10:55.735 1+0 records out 00:10:55.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029969 s, 13.7 MB/s 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:55.735 18:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.735 
18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.735 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.994 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.254 [2024-12-15 18:41:56.506349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:56.254 [2024-12-15 18:41:56.506410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.254 [2024-12-15 18:41:56.506431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:56.254 [2024-12-15 18:41:56.506444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.254 [2024-12-15 18:41:56.508602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.254 [2024-12-15 18:41:56.508638] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:56.254 [2024-12-15 18:41:56.508720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:56.254 [2024-12-15 
18:41:56.508760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:56.254 [2024-12-15 18:41:56.508894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.254 spare 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.254 [2024-12-15 18:41:56.608787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:10:56.254 [2024-12-15 18:41:56.608823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.254 [2024-12-15 18:41:56.609165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:10:56.254 [2024-12-15 18:41:56.609333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:10:56.254 [2024-12-15 18:41:56.609353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:10:56.254 [2024-12-15 18:41:56.609491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.254 "name": "raid_bdev1", 00:10:56.254 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:56.254 "strip_size_kb": 0, 00:10:56.254 "state": "online", 00:10:56.254 "raid_level": "raid1", 00:10:56.254 "superblock": true, 00:10:56.254 "num_base_bdevs": 2, 00:10:56.254 "num_base_bdevs_discovered": 2, 00:10:56.254 "num_base_bdevs_operational": 2, 00:10:56.254 "base_bdevs_list": [ 00:10:56.254 { 00:10:56.254 "name": "spare", 00:10:56.254 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:56.254 "is_configured": true, 00:10:56.254 "data_offset": 2048, 00:10:56.254 "data_size": 63488 00:10:56.254 }, 00:10:56.254 { 00:10:56.254 "name": "BaseBdev2", 00:10:56.254 "uuid": 
"55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:56.254 "is_configured": true, 00:10:56.254 "data_offset": 2048, 00:10:56.254 "data_size": 63488 00:10:56.254 } 00:10:56.254 ] 00:10:56.254 }' 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.254 18:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.824 "name": "raid_bdev1", 00:10:56.824 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:56.824 "strip_size_kb": 0, 00:10:56.824 "state": "online", 00:10:56.824 "raid_level": "raid1", 00:10:56.824 "superblock": true, 00:10:56.824 "num_base_bdevs": 2, 00:10:56.824 "num_base_bdevs_discovered": 2, 00:10:56.824 "num_base_bdevs_operational": 2, 00:10:56.824 "base_bdevs_list": [ 00:10:56.824 { 
00:10:56.824 "name": "spare", 00:10:56.824 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:56.824 "is_configured": true, 00:10:56.824 "data_offset": 2048, 00:10:56.824 "data_size": 63488 00:10:56.824 }, 00:10:56.824 { 00:10:56.824 "name": "BaseBdev2", 00:10:56.824 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:56.824 "is_configured": true, 00:10:56.824 "data_offset": 2048, 00:10:56.824 "data_size": 63488 00:10:56.824 } 00:10:56.824 ] 00:10:56.824 }' 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:56.824 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.825 [2024-12-15 18:41:57.229179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
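The trace above ends by reading `.[].base_bdevs_list[0].name` out of the `bdev_raid_get_bdevs` output and comparing it to `spare` before removing that base bdev. A minimal sketch of that jq step, using a trimmed copy of the JSON printed above rather than a live `rpc_cmd` call:

```shell
#!/usr/bin/env bash
# Trimmed copy of the bdev_raid_get_bdevs output shown in the trace above;
# only the fields this particular check touches are kept.
all_bdevs='[{"name":"raid_bdev1","base_bdevs_list":[{"name":"spare"},{"name":"BaseBdev2"}]}]'

# Same filter the test traces: the name of the first base bdev in the list.
first_base=$(echo "$all_bdevs" | jq -r '.[].base_bdevs_list[0].name')

# Mirrors the [[ spare == \s\p\a\r\e ]] comparison in the xtrace output.
[[ $first_base == spare ]] && echo "first base bdev is spare"
```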
00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.825 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.084 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.084 "name": "raid_bdev1", 00:10:57.084 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:57.084 "strip_size_kb": 0, 00:10:57.084 
"state": "online", 00:10:57.084 "raid_level": "raid1", 00:10:57.084 "superblock": true, 00:10:57.084 "num_base_bdevs": 2, 00:10:57.084 "num_base_bdevs_discovered": 1, 00:10:57.084 "num_base_bdevs_operational": 1, 00:10:57.084 "base_bdevs_list": [ 00:10:57.084 { 00:10:57.084 "name": null, 00:10:57.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.084 "is_configured": false, 00:10:57.084 "data_offset": 0, 00:10:57.084 "data_size": 63488 00:10:57.084 }, 00:10:57.084 { 00:10:57.084 "name": "BaseBdev2", 00:10:57.084 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:57.084 "is_configured": true, 00:10:57.084 "data_offset": 2048, 00:10:57.084 "data_size": 63488 00:10:57.084 } 00:10:57.084 ] 00:10:57.084 }' 00:10:57.084 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.084 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.344 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:57.344 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.344 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.344 [2024-12-15 18:41:57.716416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.344 [2024-12-15 18:41:57.716643] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:57.344 [2024-12-15 18:41:57.716657] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
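The `verify_raid_bdev_state` call traced above selects the `raid_bdev1` entry by name and then checks fields such as `state` and `num_base_bdevs_discovered` against the expected values (`online`, `1`). A small sketch of that select-and-read pattern, fed a trimmed copy of the degraded-array JSON from the trace:

```shell
#!/usr/bin/env bash
# Trimmed copy of the degraded-array JSON above; field names match the
# bdev_raid_get_bdevs output, values are copied from the trace.
all_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'

# Same select-by-name filter verify_raid_bdev_state traces above.
info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```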
00:10:57.344 [2024-12-15 18:41:57.716695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:57.344 [2024-12-15 18:41:57.721568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:10:57.344 18:41:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.344 18:41:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:57.344 [2024-12-15 18:41:57.723443] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.816 "name": "raid_bdev1", 00:10:58.816 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:58.816 "strip_size_kb": 0, 00:10:58.816 "state": "online", 00:10:58.816 "raid_level": "raid1", 
00:10:58.816 "superblock": true, 00:10:58.816 "num_base_bdevs": 2, 00:10:58.816 "num_base_bdevs_discovered": 2, 00:10:58.816 "num_base_bdevs_operational": 2, 00:10:58.816 "process": { 00:10:58.816 "type": "rebuild", 00:10:58.816 "target": "spare", 00:10:58.816 "progress": { 00:10:58.816 "blocks": 20480, 00:10:58.816 "percent": 32 00:10:58.816 } 00:10:58.816 }, 00:10:58.816 "base_bdevs_list": [ 00:10:58.816 { 00:10:58.816 "name": "spare", 00:10:58.816 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:10:58.816 "is_configured": true, 00:10:58.816 "data_offset": 2048, 00:10:58.816 "data_size": 63488 00:10:58.816 }, 00:10:58.816 { 00:10:58.816 "name": "BaseBdev2", 00:10:58.816 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:58.816 "is_configured": true, 00:10:58.816 "data_offset": 2048, 00:10:58.816 "data_size": 63488 00:10:58.816 } 00:10:58.816 ] 00:10:58.816 }' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 [2024-12-15 18:41:58.879963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:58.816 [2024-12-15 18:41:58.928332] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:58.816 [2024-12-15 18:41:58.928389] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:58.816 [2024-12-15 18:41:58.928406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:58.816 [2024-12-15 18:41:58.928414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.816 "name": "raid_bdev1", 00:10:58.816 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:10:58.816 "strip_size_kb": 0, 00:10:58.816 "state": "online", 00:10:58.816 "raid_level": "raid1", 00:10:58.816 "superblock": true, 00:10:58.816 "num_base_bdevs": 2, 00:10:58.816 "num_base_bdevs_discovered": 1, 00:10:58.816 "num_base_bdevs_operational": 1, 00:10:58.816 "base_bdevs_list": [ 00:10:58.816 { 00:10:58.816 "name": null, 00:10:58.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.816 "is_configured": false, 00:10:58.816 "data_offset": 0, 00:10:58.816 "data_size": 63488 00:10:58.816 }, 00:10:58.816 { 00:10:58.816 "name": "BaseBdev2", 00:10:58.816 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:10:58.816 "is_configured": true, 00:10:58.816 "data_offset": 2048, 00:10:58.816 "data_size": 63488 00:10:58.816 } 00:10:58.816 ] 00:10:58.816 }' 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.816 18:41:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.075 18:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:59.075 18:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.075 18:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.075 [2024-12-15 18:41:59.424488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:59.075 [2024-12-15 18:41:59.424570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.075 [2024-12-15 18:41:59.424601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:59.075 [2024-12-15 18:41:59.424612] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.075 [2024-12-15 18:41:59.425073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.075 [2024-12-15 18:41:59.425101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:59.075 [2024-12-15 18:41:59.425194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:59.075 [2024-12-15 18:41:59.425211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:59.075 [2024-12-15 18:41:59.425224] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:59.075 [2024-12-15 18:41:59.425243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:59.075 [2024-12-15 18:41:59.430068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:10:59.075 spare 00:10:59.075 18:41:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.075 18:41:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:59.075 [2024-12-15 18:41:59.431893] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.011 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.271 "name": "raid_bdev1", 00:11:00.271 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:00.271 "strip_size_kb": 0, 00:11:00.271 "state": "online", 00:11:00.271 "raid_level": "raid1", 00:11:00.271 "superblock": true, 00:11:00.271 "num_base_bdevs": 2, 00:11:00.271 "num_base_bdevs_discovered": 2, 00:11:00.271 "num_base_bdevs_operational": 2, 00:11:00.271 "process": { 00:11:00.271 "type": "rebuild", 00:11:00.271 "target": "spare", 00:11:00.271 "progress": { 00:11:00.271 "blocks": 20480, 00:11:00.271 "percent": 32 00:11:00.271 } 00:11:00.271 }, 00:11:00.271 "base_bdevs_list": [ 00:11:00.271 { 00:11:00.271 "name": "spare", 00:11:00.271 "uuid": "572d8461-ebf7-5b5c-82af-ef6da036a1ba", 00:11:00.271 "is_configured": true, 00:11:00.271 "data_offset": 2048, 00:11:00.271 "data_size": 63488 00:11:00.271 }, 00:11:00.271 { 00:11:00.271 "name": "BaseBdev2", 00:11:00.271 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:00.271 "is_configured": true, 00:11:00.271 "data_offset": 2048, 00:11:00.271 "data_size": 63488 00:11:00.271 } 00:11:00.271 ] 00:11:00.271 }' 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.271 
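The two jq filters traced here, `.process.type // "none"` and `.process.target // "none"`, read the rebuild-progress object out of the raid bdev info; the `// "none"` alternative keeps the check well-defined when no background process is running. A minimal sketch against a trimmed copy of the in-rebuild JSON shown above:

```shell
#!/usr/bin/env bash
# Trimmed copy of the in-rebuild JSON above; the "process" object is what the
# two traced jq checks read.
raid_bdev_info='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare","progress":{"blocks":20480,"percent":32}}}'

# `// "none"` falls back when the process key is absent (no rebuild running).
ptype=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
ptarget=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$ptype $ptarget"
```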
18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.271 [2024-12-15 18:42:00.572502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.271 [2024-12-15 18:42:00.636288] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:00.271 [2024-12-15 18:42:00.636346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.271 [2024-12-15 18:42:00.636361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:00.271 [2024-12-15 18:42:00.636370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.271 "name": "raid_bdev1", 00:11:00.271 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:00.271 "strip_size_kb": 0, 00:11:00.271 "state": "online", 00:11:00.271 "raid_level": "raid1", 00:11:00.271 "superblock": true, 00:11:00.271 "num_base_bdevs": 2, 00:11:00.271 "num_base_bdevs_discovered": 1, 00:11:00.271 "num_base_bdevs_operational": 1, 00:11:00.271 "base_bdevs_list": [ 00:11:00.271 { 00:11:00.271 "name": null, 00:11:00.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.271 "is_configured": false, 00:11:00.271 "data_offset": 0, 00:11:00.271 "data_size": 63488 00:11:00.271 }, 00:11:00.271 { 00:11:00.271 "name": "BaseBdev2", 00:11:00.271 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:00.271 "is_configured": true, 00:11:00.271 "data_offset": 2048, 00:11:00.271 "data_size": 63488 00:11:00.271 } 00:11:00.271 ] 00:11:00.271 }' 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.271 18:42:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.838 18:42:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.839 "name": "raid_bdev1", 00:11:00.839 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:00.839 "strip_size_kb": 0, 00:11:00.839 "state": "online", 00:11:00.839 "raid_level": "raid1", 00:11:00.839 "superblock": true, 00:11:00.839 "num_base_bdevs": 2, 00:11:00.839 "num_base_bdevs_discovered": 1, 00:11:00.839 "num_base_bdevs_operational": 1, 00:11:00.839 "base_bdevs_list": [ 00:11:00.839 { 00:11:00.839 "name": null, 00:11:00.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.839 "is_configured": false, 00:11:00.839 "data_offset": 0, 00:11:00.839 "data_size": 63488 00:11:00.839 }, 00:11:00.839 { 00:11:00.839 "name": "BaseBdev2", 00:11:00.839 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:00.839 "is_configured": true, 00:11:00.839 "data_offset": 2048, 00:11:00.839 "data_size": 
63488 00:11:00.839 } 00:11:00.839 ] 00:11:00.839 }' 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.839 [2024-12-15 18:42:01.236169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:00.839 [2024-12-15 18:42:01.236229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.839 [2024-12-15 18:42:01.236265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:00.839 [2024-12-15 18:42:01.236276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.839 [2024-12-15 18:42:01.236683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.839 [2024-12-15 18:42:01.236704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:00.839 [2024-12-15 18:42:01.236775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:00.839 [2024-12-15 18:42:01.236796] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:00.839 [2024-12-15 18:42:01.236823] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:00.839 [2024-12-15 18:42:01.236836] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:00.839 BaseBdev1 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.839 18:42:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.217 "name": "raid_bdev1", 00:11:02.217 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:02.217 "strip_size_kb": 0, 00:11:02.217 "state": "online", 00:11:02.217 "raid_level": "raid1", 00:11:02.217 "superblock": true, 00:11:02.217 "num_base_bdevs": 2, 00:11:02.217 "num_base_bdevs_discovered": 1, 00:11:02.217 "num_base_bdevs_operational": 1, 00:11:02.217 "base_bdevs_list": [ 00:11:02.217 { 00:11:02.217 "name": null, 00:11:02.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.217 "is_configured": false, 00:11:02.217 "data_offset": 0, 00:11:02.217 "data_size": 63488 00:11:02.217 }, 00:11:02.217 { 00:11:02.217 "name": "BaseBdev2", 00:11:02.217 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:02.217 "is_configured": true, 00:11:02.217 "data_offset": 2048, 00:11:02.217 "data_size": 63488 00:11:02.217 } 00:11:02.217 ] 00:11:02.217 }' 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.217 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.476 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:02.476 "name": "raid_bdev1", 00:11:02.476 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:02.476 "strip_size_kb": 0, 00:11:02.476 "state": "online", 00:11:02.476 "raid_level": "raid1", 00:11:02.476 "superblock": true, 00:11:02.476 "num_base_bdevs": 2, 00:11:02.476 "num_base_bdevs_discovered": 1, 00:11:02.477 "num_base_bdevs_operational": 1, 00:11:02.477 "base_bdevs_list": [ 00:11:02.477 { 00:11:02.477 "name": null, 00:11:02.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.477 "is_configured": false, 00:11:02.477 "data_offset": 0, 00:11:02.477 "data_size": 63488 00:11:02.477 }, 00:11:02.477 { 00:11:02.477 "name": "BaseBdev2", 00:11:02.477 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:02.477 "is_configured": true, 00:11:02.477 "data_offset": 2048, 00:11:02.477 "data_size": 63488 00:11:02.477 } 00:11:02.477 ] 00:11:02.477 }' 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:02.477 18:42:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.477 [2024-12-15 18:42:02.857495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.477 [2024-12-15 18:42:02.857663] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:02.477 [2024-12-15 18:42:02.857676] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:02.477 request: 00:11:02.477 { 00:11:02.477 "base_bdev": "BaseBdev1", 00:11:02.477 "raid_bdev": "raid_bdev1", 00:11:02.477 "method": 
"bdev_raid_add_base_bdev", 00:11:02.477 "req_id": 1 00:11:02.477 } 00:11:02.477 Got JSON-RPC error response 00:11:02.477 response: 00:11:02.477 { 00:11:02.477 "code": -22, 00:11:02.477 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:02.477 } 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.477 18:42:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.856 18:42:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.856 "name": "raid_bdev1", 00:11:03.856 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:03.856 "strip_size_kb": 0, 00:11:03.856 "state": "online", 00:11:03.856 "raid_level": "raid1", 00:11:03.856 "superblock": true, 00:11:03.856 "num_base_bdevs": 2, 00:11:03.856 "num_base_bdevs_discovered": 1, 00:11:03.856 "num_base_bdevs_operational": 1, 00:11:03.856 "base_bdevs_list": [ 00:11:03.856 { 00:11:03.856 "name": null, 00:11:03.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.856 "is_configured": false, 00:11:03.856 "data_offset": 0, 00:11:03.856 "data_size": 63488 00:11:03.856 }, 00:11:03.856 { 00:11:03.856 "name": "BaseBdev2", 00:11:03.856 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:03.856 "is_configured": true, 00:11:03.856 "data_offset": 2048, 00:11:03.856 "data_size": 63488 00:11:03.856 } 00:11:03.856 ] 00:11:03.856 }' 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.856 18:42:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.116 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.116 "name": "raid_bdev1", 00:11:04.116 "uuid": "e5e2bdc1-01de-46d0-9612-c7e9a485b82e", 00:11:04.116 "strip_size_kb": 0, 00:11:04.116 "state": "online", 00:11:04.117 "raid_level": "raid1", 00:11:04.117 "superblock": true, 00:11:04.117 "num_base_bdevs": 2, 00:11:04.117 "num_base_bdevs_discovered": 1, 00:11:04.117 "num_base_bdevs_operational": 1, 00:11:04.117 "base_bdevs_list": [ 00:11:04.117 { 00:11:04.117 "name": null, 00:11:04.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.117 "is_configured": false, 00:11:04.117 "data_offset": 0, 00:11:04.117 "data_size": 63488 00:11:04.117 }, 00:11:04.117 { 00:11:04.117 "name": "BaseBdev2", 00:11:04.117 "uuid": "55eb14e1-140f-5802-8cf5-d25a7e698db2", 00:11:04.117 "is_configured": true, 00:11:04.117 "data_offset": 2048, 00:11:04.117 "data_size": 63488 00:11:04.117 } 00:11:04.117 ] 00:11:04.117 }' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88339 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88339 ']' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88339 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88339 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.117 killing process with pid 88339 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88339' 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88339 00:11:04.117 Received shutdown signal, test time was about 60.000000 seconds 00:11:04.117 00:11:04.117 Latency(us) 00:11:04.117 [2024-12-15T18:42:04.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.117 [2024-12-15T18:42:04.558Z] =================================================================================================================== 00:11:04.117 [2024-12-15T18:42:04.558Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:04.117 [2024-12-15 18:42:04.505757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.117 [2024-12-15 
18:42:04.505893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.117 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88339 00:11:04.117 [2024-12-15 18:42:04.505951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.117 [2024-12-15 18:42:04.505962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:04.117 [2024-12-15 18:42:04.538459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.377 18:42:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:04.377 00:11:04.377 real 0m21.621s 00:11:04.377 user 0m27.186s 00:11:04.377 sys 0m3.549s 00:11:04.377 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.377 18:42:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.377 ************************************ 00:11:04.377 END TEST raid_rebuild_test_sb 00:11:04.377 ************************************ 00:11:04.377 18:42:04 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:04.377 18:42:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:04.377 18:42:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.377 18:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.637 ************************************ 00:11:04.637 START TEST raid_rebuild_test_io 00:11:04.637 ************************************ 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:04.637 
18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89054 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89054 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89054 ']' 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.637 18:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.637 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:04.637 Zero copy mechanism will not be used. 00:11:04.637 [2024-12-15 18:42:04.923348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:11:04.637 [2024-12-15 18:42:04.923479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89054 ] 00:11:04.902 [2024-12-15 18:42:05.086190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.902 [2024-12-15 18:42:05.113105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.902 [2024-12-15 18:42:05.156324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.902 [2024-12-15 18:42:05.156364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 BaseBdev1_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 [2024-12-15 18:42:05.780125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:11:05.548 [2024-12-15 18:42:05.780206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.548 [2024-12-15 18:42:05.780240] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:05.548 [2024-12-15 18:42:05.780254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.548 [2024-12-15 18:42:05.782307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.548 [2024-12-15 18:42:05.782339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.548 BaseBdev1 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 BaseBdev2_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 [2024-12-15 18:42:05.808706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:05.548 [2024-12-15 18:42:05.808756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.548 [2024-12-15 18:42:05.808776] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:05.548 [2024-12-15 18:42:05.808784] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.548 [2024-12-15 18:42:05.810755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.548 [2024-12-15 18:42:05.810786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.548 BaseBdev2 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 spare_malloc 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 spare_delay 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.548 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.548 [2024-12-15 18:42:05.849309] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:11:05.548 [2024-12-15 18:42:05.849362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.549 [2024-12-15 18:42:05.849383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:05.549 [2024-12-15 18:42:05.849391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.549 [2024-12-15 18:42:05.851427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.549 [2024-12-15 18:42:05.851456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:05.549 spare 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.549 [2024-12-15 18:42:05.861325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.549 [2024-12-15 18:42:05.863408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.549 [2024-12-15 18:42:05.863500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:05.549 [2024-12-15 18:42:05.863516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:05.549 [2024-12-15 18:42:05.863765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:05.549 [2024-12-15 18:42:05.863929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:05.549 [2024-12-15 18:42:05.863946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:11:05.549 [2024-12-15 18:42:05.864072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.549 
"name": "raid_bdev1", 00:11:05.549 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:05.549 "strip_size_kb": 0, 00:11:05.549 "state": "online", 00:11:05.549 "raid_level": "raid1", 00:11:05.549 "superblock": false, 00:11:05.549 "num_base_bdevs": 2, 00:11:05.549 "num_base_bdevs_discovered": 2, 00:11:05.549 "num_base_bdevs_operational": 2, 00:11:05.549 "base_bdevs_list": [ 00:11:05.549 { 00:11:05.549 "name": "BaseBdev1", 00:11:05.549 "uuid": "fd148716-64c3-5689-82ef-a1b9ca405035", 00:11:05.549 "is_configured": true, 00:11:05.549 "data_offset": 0, 00:11:05.549 "data_size": 65536 00:11:05.549 }, 00:11:05.549 { 00:11:05.549 "name": "BaseBdev2", 00:11:05.549 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:05.549 "is_configured": true, 00:11:05.549 "data_offset": 0, 00:11:05.549 "data_size": 65536 00:11:05.549 } 00:11:05.549 ] 00:11:05.549 }' 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.549 18:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.117 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:06.117 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.117 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.117 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:06.118 [2024-12-15 18:42:06.284935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.118 [2024-12-15 18:42:06.384499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:06.118 18:42:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.118 "name": "raid_bdev1", 00:11:06.118 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:06.118 "strip_size_kb": 0, 00:11:06.118 "state": "online", 00:11:06.118 "raid_level": "raid1", 00:11:06.118 "superblock": false, 00:11:06.118 "num_base_bdevs": 2, 00:11:06.118 "num_base_bdevs_discovered": 1, 00:11:06.118 "num_base_bdevs_operational": 1, 00:11:06.118 "base_bdevs_list": [ 00:11:06.118 { 00:11:06.118 "name": null, 00:11:06.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.118 "is_configured": false, 00:11:06.118 "data_offset": 0, 00:11:06.118 "data_size": 65536 00:11:06.118 }, 00:11:06.118 { 00:11:06.118 "name": "BaseBdev2", 00:11:06.118 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:06.118 "is_configured": true, 00:11:06.118 "data_offset": 0, 00:11:06.118 "data_size": 65536 00:11:06.118 } 00:11:06.118 ] 00:11:06.118 }' 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:06.118 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.118 [2024-12-15 18:42:06.483047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:06.118 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:06.118 Zero copy mechanism will not be used. 00:11:06.118 Running I/O for 60 seconds... 00:11:06.378 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:06.378 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.378 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.378 [2024-12-15 18:42:06.806225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:06.637 18:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.637 18:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:06.637 [2024-12-15 18:42:06.856655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:06.637 [2024-12-15 18:42:06.858578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:06.637 [2024-12-15 18:42:06.977623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:06.638 [2024-12-15 18:42:06.978139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:06.897 [2024-12-15 18:42:07.197502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:06.897 [2024-12-15 18:42:07.197828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:07.415 144.00 IOPS, 432.00 MiB/s 
[2024-12-15T18:42:07.856Z] [2024-12-15 18:42:07.641318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:07.415 [2024-12-15 18:42:07.641583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 18:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.674 "name": "raid_bdev1", 00:11:07.674 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:07.674 "strip_size_kb": 0, 00:11:07.674 "state": "online", 00:11:07.674 "raid_level": "raid1", 00:11:07.674 "superblock": false, 00:11:07.674 "num_base_bdevs": 2, 00:11:07.674 "num_base_bdevs_discovered": 2, 00:11:07.674 "num_base_bdevs_operational": 2, 00:11:07.674 "process": { 00:11:07.674 "type": "rebuild", 00:11:07.674 "target": "spare", 
00:11:07.674 "progress": { 00:11:07.674 "blocks": 10240, 00:11:07.674 "percent": 15 00:11:07.674 } 00:11:07.674 }, 00:11:07.674 "base_bdevs_list": [ 00:11:07.674 { 00:11:07.674 "name": "spare", 00:11:07.674 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:07.674 "is_configured": true, 00:11:07.674 "data_offset": 0, 00:11:07.674 "data_size": 65536 00:11:07.674 }, 00:11:07.674 { 00:11:07.674 "name": "BaseBdev2", 00:11:07.674 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:07.674 "is_configured": true, 00:11:07.674 "data_offset": 0, 00:11:07.674 "data_size": 65536 00:11:07.674 } 00:11:07.674 ] 00:11:07.674 }' 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.674 [2024-12-15 18:42:07.976973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.674 18:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.674 [2024-12-15 18:42:07.998263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.934 [2024-12-15 18:42:08.137473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:07.934 [2024-12-15 18:42:08.139845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.934 [2024-12-15 18:42:08.139891] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.934 [2024-12-15 18:42:08.139906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:07.934 [2024-12-15 18:42:08.167992] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.934 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.935 "name": "raid_bdev1", 00:11:07.935 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:07.935 "strip_size_kb": 0, 00:11:07.935 "state": "online", 00:11:07.935 "raid_level": "raid1", 00:11:07.935 "superblock": false, 00:11:07.935 "num_base_bdevs": 2, 00:11:07.935 "num_base_bdevs_discovered": 1, 00:11:07.935 "num_base_bdevs_operational": 1, 00:11:07.935 "base_bdevs_list": [ 00:11:07.935 { 00:11:07.935 "name": null, 00:11:07.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.935 "is_configured": false, 00:11:07.935 "data_offset": 0, 00:11:07.935 "data_size": 65536 00:11:07.935 }, 00:11:07.935 { 00:11:07.935 "name": "BaseBdev2", 00:11:07.935 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:07.935 "is_configured": true, 00:11:07.935 "data_offset": 0, 00:11:07.935 "data_size": 65536 00:11:07.935 } 00:11:07.935 ] 00:11:07.935 }' 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.935 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.194 153.50 IOPS, 460.50 MiB/s [2024-12-15T18:42:08.635Z] 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.194 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.453 "name": "raid_bdev1", 00:11:08.453 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:08.453 "strip_size_kb": 0, 00:11:08.453 "state": "online", 00:11:08.453 "raid_level": "raid1", 00:11:08.453 "superblock": false, 00:11:08.453 "num_base_bdevs": 2, 00:11:08.453 "num_base_bdevs_discovered": 1, 00:11:08.453 "num_base_bdevs_operational": 1, 00:11:08.453 "base_bdevs_list": [ 00:11:08.453 { 00:11:08.453 "name": null, 00:11:08.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.453 "is_configured": false, 00:11:08.453 "data_offset": 0, 00:11:08.453 "data_size": 65536 00:11:08.453 }, 00:11:08.453 { 00:11:08.453 "name": "BaseBdev2", 00:11:08.453 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:08.453 "is_configured": true, 00:11:08.453 "data_offset": 0, 00:11:08.453 "data_size": 65536 00:11:08.453 } 00:11:08.453 ] 00:11:08.453 }' 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:08.453 18:42:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.453 [2024-12-15 18:42:08.743881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.453 18:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:08.453 [2024-12-15 18:42:08.782073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:08.453 [2024-12-15 18:42:08.784006] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:08.712 [2024-12-15 18:42:08.903175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:08.712 [2024-12-15 18:42:08.903745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:08.712 [2024-12-15 18:42:09.116898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:08.712 [2024-12-15 18:42:09.117221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:09.281 [2024-12-15 18:42:09.446955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:09.541 164.00 IOPS, 492.00 MiB/s [2024-12-15T18:42:09.982Z] 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.541 18:42:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.541 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.541 "name": "raid_bdev1", 00:11:09.542 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:09.542 "strip_size_kb": 0, 00:11:09.542 "state": "online", 00:11:09.542 "raid_level": "raid1", 00:11:09.542 "superblock": false, 00:11:09.542 "num_base_bdevs": 2, 00:11:09.542 "num_base_bdevs_discovered": 2, 00:11:09.542 "num_base_bdevs_operational": 2, 00:11:09.542 "process": { 00:11:09.542 "type": "rebuild", 00:11:09.542 "target": "spare", 00:11:09.542 "progress": { 00:11:09.542 "blocks": 14336, 00:11:09.542 "percent": 21 00:11:09.542 } 00:11:09.542 }, 00:11:09.542 "base_bdevs_list": [ 00:11:09.542 { 00:11:09.542 "name": "spare", 00:11:09.542 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:09.542 "is_configured": true, 00:11:09.542 "data_offset": 0, 00:11:09.542 "data_size": 65536 00:11:09.542 }, 00:11:09.542 { 00:11:09.542 "name": "BaseBdev2", 00:11:09.542 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:09.542 "is_configured": true, 00:11:09.542 "data_offset": 0, 00:11:09.542 "data_size": 65536 00:11:09.542 } 00:11:09.542 ] 00:11:09.542 }' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.542 [2024-12-15 18:42:09.885162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:09.542 [2024-12-15 18:42:09.885484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=326 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.542 "name": "raid_bdev1", 00:11:09.542 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:09.542 "strip_size_kb": 0, 00:11:09.542 "state": "online", 00:11:09.542 "raid_level": "raid1", 00:11:09.542 "superblock": false, 00:11:09.542 "num_base_bdevs": 2, 00:11:09.542 "num_base_bdevs_discovered": 2, 00:11:09.542 "num_base_bdevs_operational": 2, 00:11:09.542 "process": { 00:11:09.542 "type": "rebuild", 00:11:09.542 "target": "spare", 00:11:09.542 "progress": { 00:11:09.542 "blocks": 16384, 00:11:09.542 "percent": 25 00:11:09.542 } 00:11:09.542 }, 00:11:09.542 "base_bdevs_list": [ 00:11:09.542 { 00:11:09.542 "name": "spare", 00:11:09.542 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:09.542 "is_configured": true, 00:11:09.542 "data_offset": 0, 00:11:09.542 "data_size": 65536 00:11:09.542 }, 00:11:09.542 { 00:11:09.542 "name": "BaseBdev2", 00:11:09.542 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:09.542 "is_configured": true, 00:11:09.542 "data_offset": 0, 00:11:09.542 "data_size": 65536 00:11:09.542 } 00:11:09.542 ] 00:11:09.542 }' 00:11:09.542 18:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.801 18:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.801 18:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.801 18:42:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.801 18:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:09.801 [2024-12-15 18:42:10.116281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:10.069 [2024-12-15 18:42:10.281324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:10.342 145.75 IOPS, 437.25 MiB/s [2024-12-15T18:42:10.783Z] [2024-12-15 18:42:10.698525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:10.601 [2024-12-15 18:42:10.907648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.861 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.861 "name": "raid_bdev1", 00:11:10.861 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:10.861 "strip_size_kb": 0, 00:11:10.861 "state": "online", 00:11:10.861 "raid_level": "raid1", 00:11:10.861 "superblock": false, 00:11:10.861 "num_base_bdevs": 2, 00:11:10.861 "num_base_bdevs_discovered": 2, 00:11:10.861 "num_base_bdevs_operational": 2, 00:11:10.861 "process": { 00:11:10.861 "type": "rebuild", 00:11:10.861 "target": "spare", 00:11:10.861 "progress": { 00:11:10.861 "blocks": 32768, 00:11:10.861 "percent": 50 00:11:10.861 } 00:11:10.861 }, 00:11:10.861 "base_bdevs_list": [ 00:11:10.861 { 00:11:10.861 "name": "spare", 00:11:10.861 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:10.861 "is_configured": true, 00:11:10.861 "data_offset": 0, 00:11:10.861 "data_size": 65536 00:11:10.861 }, 00:11:10.861 { 00:11:10.861 "name": "BaseBdev2", 00:11:10.861 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:10.861 "is_configured": true, 00:11:10.861 "data_offset": 0, 00:11:10.861 "data_size": 65536 00:11:10.861 } 00:11:10.861 ] 00:11:10.861 }' 00:11:10.862 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.862 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:10.862 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.862 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:10.862 18:42:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:11.121 [2024-12-15 18:42:11.330906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 
00:11:11.121 [2024-12-15 18:42:11.444752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:11.689 129.80 IOPS, 389.40 MiB/s [2024-12-15T18:42:12.130Z] [2024-12-15 18:42:12.093768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.948 "name": "raid_bdev1", 00:11:11.948 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:11.948 "strip_size_kb": 0, 00:11:11.948 "state": "online", 00:11:11.948 "raid_level": "raid1", 00:11:11.948 "superblock": false, 00:11:11.948 "num_base_bdevs": 2, 00:11:11.948 
"num_base_bdevs_discovered": 2, 00:11:11.948 "num_base_bdevs_operational": 2, 00:11:11.948 "process": { 00:11:11.948 "type": "rebuild", 00:11:11.948 "target": "spare", 00:11:11.948 "progress": { 00:11:11.948 "blocks": 53248, 00:11:11.948 "percent": 81 00:11:11.948 } 00:11:11.948 }, 00:11:11.948 "base_bdevs_list": [ 00:11:11.948 { 00:11:11.948 "name": "spare", 00:11:11.948 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:11.948 "is_configured": true, 00:11:11.948 "data_offset": 0, 00:11:11.948 "data_size": 65536 00:11:11.948 }, 00:11:11.948 { 00:11:11.948 "name": "BaseBdev2", 00:11:11.948 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:11.948 "is_configured": true, 00:11:11.948 "data_offset": 0, 00:11:11.948 "data_size": 65536 00:11:11.948 } 00:11:11.948 ] 00:11:11.948 }' 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.948 18:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:11.948 [2024-12-15 18:42:12.321261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:12.466 114.33 IOPS, 343.00 MiB/s [2024-12-15T18:42:12.907Z] [2024-12-15 18:42:12.863276] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:12.725 [2024-12-15 18:42:12.968334] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:12.725 [2024-12-15 18:42:12.970605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.985 "name": "raid_bdev1", 00:11:12.985 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:12.985 "strip_size_kb": 0, 00:11:12.985 "state": "online", 00:11:12.985 "raid_level": "raid1", 00:11:12.985 "superblock": false, 00:11:12.985 "num_base_bdevs": 2, 00:11:12.985 "num_base_bdevs_discovered": 2, 00:11:12.985 "num_base_bdevs_operational": 2, 00:11:12.985 "base_bdevs_list": [ 00:11:12.985 { 00:11:12.985 "name": "spare", 00:11:12.985 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:12.985 "is_configured": true, 00:11:12.985 "data_offset": 0, 00:11:12.985 "data_size": 65536 00:11:12.985 }, 00:11:12.985 { 00:11:12.985 "name": "BaseBdev2", 00:11:12.985 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:12.985 
"is_configured": true, 00:11:12.985 "data_offset": 0, 00:11:12.985 "data_size": 65536 00:11:12.985 } 00:11:12.985 ] 00:11:12.985 }' 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:12.985 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:13.244 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 102.14 IOPS, 306.43 MiB/s [2024-12-15T18:42:13.686Z] 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.245 "name": "raid_bdev1", 00:11:13.245 "uuid": 
"920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:13.245 "strip_size_kb": 0, 00:11:13.245 "state": "online", 00:11:13.245 "raid_level": "raid1", 00:11:13.245 "superblock": false, 00:11:13.245 "num_base_bdevs": 2, 00:11:13.245 "num_base_bdevs_discovered": 2, 00:11:13.245 "num_base_bdevs_operational": 2, 00:11:13.245 "base_bdevs_list": [ 00:11:13.245 { 00:11:13.245 "name": "spare", 00:11:13.245 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:13.245 "is_configured": true, 00:11:13.245 "data_offset": 0, 00:11:13.245 "data_size": 65536 00:11:13.245 }, 00:11:13.245 { 00:11:13.245 "name": "BaseBdev2", 00:11:13.245 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:13.245 "is_configured": true, 00:11:13.245 "data_offset": 0, 00:11:13.245 "data_size": 65536 00:11:13.245 } 00:11:13.245 ] 00:11:13.245 }' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.245 "name": "raid_bdev1", 00:11:13.245 "uuid": "920d9b36-9498-48f9-a56c-edae15c1d262", 00:11:13.245 "strip_size_kb": 0, 00:11:13.245 "state": "online", 00:11:13.245 "raid_level": "raid1", 00:11:13.245 "superblock": false, 00:11:13.245 "num_base_bdevs": 2, 00:11:13.245 "num_base_bdevs_discovered": 2, 00:11:13.245 "num_base_bdevs_operational": 2, 00:11:13.245 "base_bdevs_list": [ 00:11:13.245 { 00:11:13.245 "name": "spare", 00:11:13.245 "uuid": "4d4ca5ba-f4f7-54e2-ab4a-083e89a044ed", 00:11:13.245 "is_configured": true, 00:11:13.245 "data_offset": 0, 00:11:13.245 "data_size": 65536 00:11:13.245 }, 00:11:13.245 { 00:11:13.245 "name": "BaseBdev2", 00:11:13.245 "uuid": "53276aa4-7b2e-5525-a6dc-e61142c54508", 00:11:13.245 "is_configured": true, 00:11:13.245 "data_offset": 0, 00:11:13.245 "data_size": 65536 00:11:13.245 } 00:11:13.245 ] 00:11:13.245 }' 00:11:13.245 18:42:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.245 18:42:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.814 [2024-12-15 18:42:14.045457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.814 [2024-12-15 18:42:14.045507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.814 00:11:13.814 Latency(us) 00:11:13.814 [2024-12-15T18:42:14.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.814 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:13.814 raid_bdev1 : 7.67 95.64 286.92 0.00 0.00 15188.98 287.97 109894.43 00:11:13.814 [2024-12-15T18:42:14.255Z] =================================================================================================================== 00:11:13.814 [2024-12-15T18:42:14.255Z] Total : 95.64 286.92 0.00 0.00 15188.98 287.97 109894.43 00:11:13.814 [2024-12-15 18:42:14.148993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.814 [2024-12-15 18:42:14.149057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.814 [2024-12-15 18:42:14.149129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.814 [2024-12-15 18:42:14.149144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:13.814 { 00:11:13.814 "results": [ 00:11:13.814 { 00:11:13.814 "job": "raid_bdev1", 00:11:13.814 "core_mask": "0x1", 00:11:13.814 "workload": "randrw", 00:11:13.814 "percentage": 50, 00:11:13.814 "status": "finished", 
00:11:13.814 "queue_depth": 2, 00:11:13.814 "io_size": 3145728, 00:11:13.814 "runtime": 7.674525, 00:11:13.814 "iops": 95.64109830901587, 00:11:13.814 "mibps": 286.9232949270476, 00:11:13.814 "io_failed": 0, 00:11:13.814 "io_timeout": 0, 00:11:13.814 "avg_latency_us": 15188.983239532146, 00:11:13.814 "min_latency_us": 287.97205240174674, 00:11:13.814 "max_latency_us": 109894.42794759825 00:11:13.814 } 00:11:13.814 ], 00:11:13.814 "core_count": 1 00:11:13.814 } 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:13.814 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:14.074 /dev/nbd0 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.074 1+0 records in 00:11:14.074 1+0 records out 00:11:14.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037507 s, 10.9 MB/s 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:14.074 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:14.334 /dev/nbd1 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:14.334 1+0 records in 00:11:14.334 1+0 records out 00:11:14.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646335 s, 6.3 MB/s 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@893 -- # return 0 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:14.334 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.593 18:42:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.593 18:42:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.593 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89054 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89054 ']' 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 
-- # kill -0 89054 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89054 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89054' 00:11:14.852 killing process with pid 89054 00:11:14.852 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89054 00:11:14.852 Received shutdown signal, test time was about 8.794655 seconds 00:11:14.852 00:11:14.852 Latency(us) 00:11:14.852 [2024-12-15T18:42:15.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.853 [2024-12-15T18:42:15.294Z] =================================================================================================================== 00:11:14.853 [2024-12-15T18:42:15.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:14.853 [2024-12-15 18:42:15.263275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.853 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89054 00:11:14.853 [2024-12-15 18:42:15.290333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.111 18:42:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:15.111 00:11:15.111 real 0m10.680s 00:11:15.111 user 0m13.668s 00:11:15.111 sys 0m1.481s 00:11:15.111 18:42:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.111 18:42:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.111 ************************************ 00:11:15.111 END TEST raid_rebuild_test_io 00:11:15.111 ************************************ 00:11:15.371 18:42:15 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:15.371 18:42:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:15.371 18:42:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.371 18:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.371 ************************************ 00:11:15.371 START TEST raid_rebuild_test_sb_io 00:11:15.371 ************************************ 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89423 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89423 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89423 ']' 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.371 18:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.371 [2024-12-15 18:42:15.670979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:15.371 [2024-12-15 18:42:15.671184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:15.371 Zero copy mechanism will not be used. 
00:11:15.371 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89423 ] 00:11:15.630 [2024-12-15 18:42:15.840371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.630 [2024-12-15 18:42:15.866180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.630 [2024-12-15 18:42:15.908789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.630 [2024-12-15 18:42:15.908923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 BaseBdev1_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 [2024-12-15 18:42:16.536468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:16.200 [2024-12-15 18:42:16.536651] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.200 [2024-12-15 18:42:16.536683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:16.200 [2024-12-15 18:42:16.536696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.200 [2024-12-15 18:42:16.538871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.200 [2024-12-15 18:42:16.538923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.200 BaseBdev1 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 BaseBdev2_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 [2024-12-15 18:42:16.565152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:16.200 [2024-12-15 18:42:16.565215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.200 [2024-12-15 18:42:16.565235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:11:16.200 [2024-12-15 18:42:16.565244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.200 [2024-12-15 18:42:16.567329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.200 [2024-12-15 18:42:16.567368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:16.200 BaseBdev2 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 spare_malloc 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 spare_delay 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 [2024-12-15 18:42:16.605792] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:16.200 
[2024-12-15 18:42:16.605883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.200 [2024-12-15 18:42:16.605906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:16.200 [2024-12-15 18:42:16.605915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.200 [2024-12-15 18:42:16.607923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.200 [2024-12-15 18:42:16.607960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:16.200 spare 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 [2024-12-15 18:42:16.617830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.200 [2024-12-15 18:42:16.619541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.200 [2024-12-15 18:42:16.619690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:16.200 [2024-12-15 18:42:16.619702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:16.200 [2024-12-15 18:42:16.619987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:16.200 [2024-12-15 18:42:16.620131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:16.200 [2024-12-15 18:42:16.620144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006280 00:11:16.200 [2024-12-15 18:42:16.620255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.460 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.460 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.460 "name": "raid_bdev1", 00:11:16.460 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:16.460 "strip_size_kb": 0, 00:11:16.460 "state": "online", 00:11:16.460 "raid_level": "raid1", 00:11:16.460 "superblock": true, 00:11:16.460 "num_base_bdevs": 2, 00:11:16.460 "num_base_bdevs_discovered": 2, 00:11:16.460 "num_base_bdevs_operational": 2, 00:11:16.460 "base_bdevs_list": [ 00:11:16.460 { 00:11:16.460 "name": "BaseBdev1", 00:11:16.460 "uuid": "b8b14c0b-c4fd-54b0-9800-6757eee59fa0", 00:11:16.460 "is_configured": true, 00:11:16.460 "data_offset": 2048, 00:11:16.460 "data_size": 63488 00:11:16.460 }, 00:11:16.460 { 00:11:16.460 "name": "BaseBdev2", 00:11:16.460 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:16.460 "is_configured": true, 00:11:16.460 "data_offset": 2048, 00:11:16.460 "data_size": 63488 00:11:16.460 } 00:11:16.460 ] 00:11:16.460 }' 00:11:16.460 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.460 18:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.719 [2024-12-15 18:42:17.105311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.719 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.980 [2024-12-15 18:42:17.196833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.980 
18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.980 "name": "raid_bdev1", 00:11:16.980 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:16.980 "strip_size_kb": 0, 00:11:16.980 "state": "online", 00:11:16.980 "raid_level": "raid1", 00:11:16.980 "superblock": true, 00:11:16.980 "num_base_bdevs": 2, 00:11:16.980 "num_base_bdevs_discovered": 1, 00:11:16.980 "num_base_bdevs_operational": 1, 00:11:16.980 "base_bdevs_list": [ 00:11:16.980 { 00:11:16.980 "name": null, 00:11:16.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.980 "is_configured": false, 00:11:16.980 "data_offset": 0, 00:11:16.980 "data_size": 63488 00:11:16.980 }, 00:11:16.980 { 00:11:16.980 "name": "BaseBdev2", 00:11:16.980 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:16.980 "is_configured": true, 00:11:16.980 "data_offset": 2048, 
00:11:16.980 "data_size": 63488 00:11:16.980 } 00:11:16.980 ] 00:11:16.980 }' 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.980 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.980 [2024-12-15 18:42:17.301184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:16.980 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:16.980 Zero copy mechanism will not be used. 00:11:16.980 Running I/O for 60 seconds... 00:11:17.241 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:17.241 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.241 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.241 [2024-12-15 18:42:17.652416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:17.241 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.241 18:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:17.501 [2024-12-15 18:42:17.695279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:17.501 [2024-12-15 18:42:17.697346] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:17.501 [2024-12-15 18:42:17.804414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:17.501 [2024-12-15 18:42:17.805071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:17.761 [2024-12-15 18:42:17.942857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:17.761 
[2024-12-15 18:42:17.943281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:18.021 [2024-12-15 18:42:18.268556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:18.021 225.00 IOPS, 675.00 MiB/s [2024-12-15T18:42:18.462Z] [2024-12-15 18:42:18.377128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:18.280 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.280 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.280 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.281 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.541 [2024-12-15 18:42:18.721136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.541 "name": "raid_bdev1", 00:11:18.541 "uuid": 
"f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:18.541 "strip_size_kb": 0, 00:11:18.541 "state": "online", 00:11:18.541 "raid_level": "raid1", 00:11:18.541 "superblock": true, 00:11:18.541 "num_base_bdevs": 2, 00:11:18.541 "num_base_bdevs_discovered": 2, 00:11:18.541 "num_base_bdevs_operational": 2, 00:11:18.541 "process": { 00:11:18.541 "type": "rebuild", 00:11:18.541 "target": "spare", 00:11:18.541 "progress": { 00:11:18.541 "blocks": 12288, 00:11:18.541 "percent": 19 00:11:18.541 } 00:11:18.541 }, 00:11:18.541 "base_bdevs_list": [ 00:11:18.541 { 00:11:18.541 "name": "spare", 00:11:18.541 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:18.541 "is_configured": true, 00:11:18.541 "data_offset": 2048, 00:11:18.541 "data_size": 63488 00:11:18.541 }, 00:11:18.541 { 00:11:18.541 "name": "BaseBdev2", 00:11:18.541 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:18.541 "is_configured": true, 00:11:18.541 "data_offset": 2048, 00:11:18.541 "data_size": 63488 00:11:18.541 } 00:11:18.541 ] 00:11:18.541 }' 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.541 18:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.541 [2024-12-15 18:42:18.816156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:18.541 [2024-12-15 18:42:18.846845] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:18.541 [2024-12-15 18:42:18.847110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:18.541 [2024-12-15 18:42:18.959393] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:18.541 [2024-12-15 18:42:18.973241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.541 [2024-12-15 18:42:18.973315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:18.541 [2024-12-15 18:42:18.973332] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:18.801 [2024-12-15 18:42:18.991881] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.801 18:42:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.801 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.801 "name": "raid_bdev1", 00:11:18.801 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:18.801 "strip_size_kb": 0, 00:11:18.801 "state": "online", 00:11:18.801 "raid_level": "raid1", 00:11:18.801 "superblock": true, 00:11:18.801 "num_base_bdevs": 2, 00:11:18.801 "num_base_bdevs_discovered": 1, 00:11:18.801 "num_base_bdevs_operational": 1, 00:11:18.802 "base_bdevs_list": [ 00:11:18.802 { 00:11:18.802 "name": null, 00:11:18.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.802 "is_configured": false, 00:11:18.802 "data_offset": 0, 00:11:18.802 "data_size": 63488 00:11:18.802 }, 00:11:18.802 { 00:11:18.802 "name": "BaseBdev2", 00:11:18.802 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:18.802 "is_configured": true, 00:11:18.802 "data_offset": 2048, 00:11:18.802 "data_size": 63488 00:11:18.802 } 00:11:18.802 ] 00:11:18.802 }' 00:11:18.802 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.802 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.322 188.50 IOPS, 565.50 MiB/s [2024-12-15T18:42:19.763Z] 
18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.322 "name": "raid_bdev1", 00:11:19.322 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:19.322 "strip_size_kb": 0, 00:11:19.322 "state": "online", 00:11:19.322 "raid_level": "raid1", 00:11:19.322 "superblock": true, 00:11:19.322 "num_base_bdevs": 2, 00:11:19.322 "num_base_bdevs_discovered": 1, 00:11:19.322 "num_base_bdevs_operational": 1, 00:11:19.322 "base_bdevs_list": [ 00:11:19.322 { 00:11:19.322 "name": null, 00:11:19.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.322 "is_configured": false, 00:11:19.322 "data_offset": 0, 00:11:19.322 "data_size": 63488 00:11:19.322 }, 00:11:19.322 { 00:11:19.322 "name": "BaseBdev2", 00:11:19.322 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:19.322 "is_configured": true, 00:11:19.322 
"data_offset": 2048, 00:11:19.322 "data_size": 63488 00:11:19.322 } 00:11:19.322 ] 00:11:19.322 }' 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.322 [2024-12-15 18:42:19.658993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.322 18:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:19.322 [2024-12-15 18:42:19.698126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:19.322 [2024-12-15 18:42:19.700192] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:19.582 [2024-12-15 18:42:19.818439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:19.582 [2024-12-15 18:42:19.819134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:19.842 [2024-12-15 18:42:20.045771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:19.842 [2024-12-15 
18:42:20.046129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:20.102 [2024-12-15 18:42:20.281443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:20.102 [2024-12-15 18:42:20.282064] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:20.102 199.00 IOPS, 597.00 MiB/s [2024-12-15T18:42:20.543Z] [2024-12-15 18:42:20.503167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:20.102 [2024-12-15 18:42:20.503476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.362 
18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.362 "name": "raid_bdev1", 00:11:20.362 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:20.362 "strip_size_kb": 0, 00:11:20.362 "state": "online", 00:11:20.362 "raid_level": "raid1", 00:11:20.362 "superblock": true, 00:11:20.362 "num_base_bdevs": 2, 00:11:20.362 "num_base_bdevs_discovered": 2, 00:11:20.362 "num_base_bdevs_operational": 2, 00:11:20.362 "process": { 00:11:20.362 "type": "rebuild", 00:11:20.362 "target": "spare", 00:11:20.362 "progress": { 00:11:20.362 "blocks": 10240, 00:11:20.362 "percent": 16 00:11:20.362 } 00:11:20.362 }, 00:11:20.362 "base_bdevs_list": [ 00:11:20.362 { 00:11:20.362 "name": "spare", 00:11:20.362 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:20.362 "is_configured": true, 00:11:20.362 "data_offset": 2048, 00:11:20.362 "data_size": 63488 00:11:20.362 }, 00:11:20.362 { 00:11:20.362 "name": "BaseBdev2", 00:11:20.362 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:20.362 "is_configured": true, 00:11:20.362 "data_offset": 2048, 00:11:20.362 "data_size": 63488 00:11:20.362 } 00:11:20.362 ] 00:11:20.362 }' 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.362 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.622 [2024-12-15 18:42:20.839383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = 
false ']' 00:11:20.622 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=337 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.622 "name": "raid_bdev1", 00:11:20.622 "uuid": 
"f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:20.622 "strip_size_kb": 0, 00:11:20.622 "state": "online", 00:11:20.622 "raid_level": "raid1", 00:11:20.622 "superblock": true, 00:11:20.622 "num_base_bdevs": 2, 00:11:20.622 "num_base_bdevs_discovered": 2, 00:11:20.622 "num_base_bdevs_operational": 2, 00:11:20.622 "process": { 00:11:20.622 "type": "rebuild", 00:11:20.622 "target": "spare", 00:11:20.622 "progress": { 00:11:20.622 "blocks": 14336, 00:11:20.622 "percent": 22 00:11:20.622 } 00:11:20.622 }, 00:11:20.622 "base_bdevs_list": [ 00:11:20.622 { 00:11:20.622 "name": "spare", 00:11:20.622 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:20.622 "is_configured": true, 00:11:20.622 "data_offset": 2048, 00:11:20.622 "data_size": 63488 00:11:20.622 }, 00:11:20.622 { 00:11:20.622 "name": "BaseBdev2", 00:11:20.622 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:20.622 "is_configured": true, 00:11:20.622 "data_offset": 2048, 00:11:20.622 "data_size": 63488 00:11:20.622 } 00:11:20.622 ] 00:11:20.622 }' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.622 [2024-12-15 18:42:20.964619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:20.622 18:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:20.882 [2024-12-15 18:42:21.288419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:21.142 166.75 IOPS, 500.25 MiB/s [2024-12-15T18:42:21.583Z] [2024-12-15 
18:42:21.515905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:21.711 [2024-12-15 18:42:21.974315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:21.711 18:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.711 "name": "raid_bdev1", 00:11:21.711 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:21.711 "strip_size_kb": 0, 00:11:21.711 "state": "online", 00:11:21.711 "raid_level": "raid1", 00:11:21.711 "superblock": true, 00:11:21.711 "num_base_bdevs": 2, 00:11:21.711 "num_base_bdevs_discovered": 2, 00:11:21.711 
"num_base_bdevs_operational": 2, 00:11:21.711 "process": { 00:11:21.711 "type": "rebuild", 00:11:21.711 "target": "spare", 00:11:21.711 "progress": { 00:11:21.711 "blocks": 28672, 00:11:21.711 "percent": 45 00:11:21.711 } 00:11:21.711 }, 00:11:21.711 "base_bdevs_list": [ 00:11:21.711 { 00:11:21.711 "name": "spare", 00:11:21.711 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:21.711 "is_configured": true, 00:11:21.711 "data_offset": 2048, 00:11:21.711 "data_size": 63488 00:11:21.711 }, 00:11:21.711 { 00:11:21.711 "name": "BaseBdev2", 00:11:21.711 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:21.711 "is_configured": true, 00:11:21.711 "data_offset": 2048, 00:11:21.711 "data_size": 63488 00:11:21.711 } 00:11:21.711 ] 00:11:21.711 }' 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:21.711 18:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:21.971 [2024-12-15 18:42:22.212117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:22.230 141.40 IOPS, 424.20 MiB/s [2024-12-15T18:42:22.671Z] [2024-12-15 18:42:22.435124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:22.489 [2024-12-15 18:42:22.882405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:22.748 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:22.749 18:42:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.749 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.008 "name": "raid_bdev1", 00:11:23.008 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:23.008 "strip_size_kb": 0, 00:11:23.008 "state": "online", 00:11:23.008 "raid_level": "raid1", 00:11:23.008 "superblock": true, 00:11:23.008 "num_base_bdevs": 2, 00:11:23.008 "num_base_bdevs_discovered": 2, 00:11:23.008 "num_base_bdevs_operational": 2, 00:11:23.008 "process": { 00:11:23.008 "type": "rebuild", 00:11:23.008 "target": "spare", 00:11:23.008 "progress": { 00:11:23.008 "blocks": 43008, 00:11:23.008 "percent": 67 00:11:23.008 } 00:11:23.008 }, 00:11:23.008 "base_bdevs_list": [ 00:11:23.008 { 00:11:23.008 "name": "spare", 00:11:23.008 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:23.008 "is_configured": true, 00:11:23.008 "data_offset": 2048, 
00:11:23.008 "data_size": 63488 00:11:23.008 }, 00:11:23.008 { 00:11:23.008 "name": "BaseBdev2", 00:11:23.008 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:23.008 "is_configured": true, 00:11:23.008 "data_offset": 2048, 00:11:23.008 "data_size": 63488 00:11:23.008 } 00:11:23.008 ] 00:11:23.008 }' 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.008 [2024-12-15 18:42:23.208413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.008 18:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:23.946 124.33 IOPS, 373.00 MiB/s [2024-12-15T18:42:24.387Z] 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.946 112.00 IOPS, 336.00 MiB/s [2024-12-15T18:42:24.387Z] [2024-12-15 18:42:24.310023] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.946 "name": "raid_bdev1", 00:11:23.946 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:23.946 "strip_size_kb": 0, 00:11:23.946 "state": "online", 00:11:23.946 "raid_level": "raid1", 00:11:23.946 "superblock": true, 00:11:23.946 "num_base_bdevs": 2, 00:11:23.946 "num_base_bdevs_discovered": 2, 00:11:23.946 "num_base_bdevs_operational": 2, 00:11:23.946 "process": { 00:11:23.946 "type": "rebuild", 00:11:23.946 "target": "spare", 00:11:23.946 "progress": { 00:11:23.946 "blocks": 61440, 00:11:23.946 "percent": 96 00:11:23.946 } 00:11:23.946 }, 00:11:23.946 "base_bdevs_list": [ 00:11:23.946 { 00:11:23.946 "name": "spare", 00:11:23.946 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:23.946 "is_configured": true, 00:11:23.946 "data_offset": 2048, 00:11:23.946 "data_size": 63488 00:11:23.946 }, 00:11:23.946 { 00:11:23.946 "name": "BaseBdev2", 00:11:23.946 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:23.946 "is_configured": true, 00:11:23.946 "data_offset": 2048, 00:11:23.946 "data_size": 63488 00:11:23.946 } 00:11:23.946 ] 00:11:23.946 }' 00:11:23.946 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.206 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:24.206 18:42:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.206 [2024-12-15 18:42:24.407827] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:24.206 [2024-12-15 18:42:24.409549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.206 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:24.206 18:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:25.152 102.00 IOPS, 306.00 MiB/s [2024-12-15T18:42:25.593Z] 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.152 "name": 
"raid_bdev1", 00:11:25.152 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:25.152 "strip_size_kb": 0, 00:11:25.152 "state": "online", 00:11:25.152 "raid_level": "raid1", 00:11:25.152 "superblock": true, 00:11:25.152 "num_base_bdevs": 2, 00:11:25.152 "num_base_bdevs_discovered": 2, 00:11:25.152 "num_base_bdevs_operational": 2, 00:11:25.152 "base_bdevs_list": [ 00:11:25.152 { 00:11:25.152 "name": "spare", 00:11:25.152 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 2048, 00:11:25.152 "data_size": 63488 00:11:25.152 }, 00:11:25.152 { 00:11:25.152 "name": "BaseBdev2", 00:11:25.152 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:25.152 "is_configured": true, 00:11:25.152 "data_offset": 2048, 00:11:25.152 "data_size": 63488 00:11:25.152 } 00:11:25.152 ] 00:11:25.152 }' 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:25.152 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.412 18:42:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.412 "name": "raid_bdev1", 00:11:25.412 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:25.412 "strip_size_kb": 0, 00:11:25.412 "state": "online", 00:11:25.412 "raid_level": "raid1", 00:11:25.412 "superblock": true, 00:11:25.412 "num_base_bdevs": 2, 00:11:25.412 "num_base_bdevs_discovered": 2, 00:11:25.412 "num_base_bdevs_operational": 2, 00:11:25.412 "base_bdevs_list": [ 00:11:25.412 { 00:11:25.412 "name": "spare", 00:11:25.412 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:25.412 "is_configured": true, 00:11:25.412 "data_offset": 2048, 00:11:25.412 "data_size": 63488 00:11:25.412 }, 00:11:25.412 { 00:11:25.412 "name": "BaseBdev2", 00:11:25.412 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:25.412 "is_configured": true, 00:11:25.412 "data_offset": 2048, 00:11:25.412 "data_size": 63488 00:11:25.412 } 00:11:25.412 ] 00:11:25.412 }' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:25.412 18:42:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.412 "name": "raid_bdev1", 00:11:25.412 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:25.412 "strip_size_kb": 0, 00:11:25.412 "state": "online", 00:11:25.412 "raid_level": "raid1", 00:11:25.412 
"superblock": true, 00:11:25.412 "num_base_bdevs": 2, 00:11:25.412 "num_base_bdevs_discovered": 2, 00:11:25.412 "num_base_bdevs_operational": 2, 00:11:25.412 "base_bdevs_list": [ 00:11:25.412 { 00:11:25.412 "name": "spare", 00:11:25.412 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:25.412 "is_configured": true, 00:11:25.412 "data_offset": 2048, 00:11:25.412 "data_size": 63488 00:11:25.412 }, 00:11:25.412 { 00:11:25.412 "name": "BaseBdev2", 00:11:25.412 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:25.412 "is_configured": true, 00:11:25.412 "data_offset": 2048, 00:11:25.412 "data_size": 63488 00:11:25.412 } 00:11:25.412 ] 00:11:25.412 }' 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.412 18:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.981 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 [2024-12-15 18:42:26.204188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.981 [2024-12-15 18:42:26.204236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.981 00:11:25.981 Latency(us) 00:11:25.981 [2024-12-15T18:42:26.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.981 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:25.981 raid_bdev1 : 8.97 94.77 284.30 0.00 0.00 14784.51 282.61 108978.64 00:11:25.981 [2024-12-15T18:42:26.422Z] =================================================================================================================== 00:11:25.981 [2024-12-15T18:42:26.422Z] Total : 94.77 284.30 
0.00 0.00 14784.51 282.61 108978.64 00:11:25.981 [2024-12-15 18:42:26.259700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.981 [2024-12-15 18:42:26.259821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.981 [2024-12-15 18:42:26.259949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.981 [2024-12-15 18:42:26.259999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:25.981 { 00:11:25.981 "results": [ 00:11:25.981 { 00:11:25.981 "job": "raid_bdev1", 00:11:25.981 "core_mask": "0x1", 00:11:25.981 "workload": "randrw", 00:11:25.981 "percentage": 50, 00:11:25.981 "status": "finished", 00:11:25.981 "queue_depth": 2, 00:11:25.981 "io_size": 3145728, 00:11:25.981 "runtime": 8.969507, 00:11:25.981 "iops": 94.76552055759586, 00:11:25.982 "mibps": 284.29656167278756, 00:11:25.982 "io_failed": 0, 00:11:25.982 "io_timeout": 0, 00:11:25.982 "avg_latency_us": 14784.506213203187, 00:11:25.982 "min_latency_us": 282.6061135371179, 00:11:25.982 "max_latency_us": 108978.64104803493 00:11:25.982 } 00:11:25.982 ], 00:11:25.982 "core_count": 1 00:11:25.982 } 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:25.982 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:26.241 /dev/nbd0 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:26.241 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.242 1+0 records in 00:11:26.242 1+0 records out 00:11:26.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583347 s, 7.0 MB/s 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 
00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.242 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:26.501 /dev/nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.501 1+0 records in 00:11:26.501 1+0 records out 00:11:26.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398155 s, 10.3 MB/s 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:26.501 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.502 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:26.502 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.502 18:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.761 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.021 [2024-12-15 18:42:27.437946] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:27.021 [2024-12-15 18:42:27.438011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.021 [2024-12-15 18:42:27.438031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:27.021 
[2024-12-15 18:42:27.438043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.021 [2024-12-15 18:42:27.440469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.021 [2024-12-15 18:42:27.440570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:27.021 [2024-12-15 18:42:27.440686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:27.021 [2024-12-15 18:42:27.440768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:27.021 [2024-12-15 18:42:27.440954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.021 spare 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.021 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.281 [2024-12-15 18:42:27.540914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:27.281 [2024-12-15 18:42:27.540959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:27.281 [2024-12-15 18:42:27.541325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:11:27.281 [2024-12-15 18:42:27.541513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:27.281 [2024-12-15 18:42:27.541536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:27.281 [2024-12-15 18:42:27.541709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.281 "name": "raid_bdev1", 00:11:27.281 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:27.281 "strip_size_kb": 0, 00:11:27.281 
"state": "online", 00:11:27.281 "raid_level": "raid1", 00:11:27.281 "superblock": true, 00:11:27.281 "num_base_bdevs": 2, 00:11:27.281 "num_base_bdevs_discovered": 2, 00:11:27.281 "num_base_bdevs_operational": 2, 00:11:27.281 "base_bdevs_list": [ 00:11:27.281 { 00:11:27.281 "name": "spare", 00:11:27.281 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:27.281 "is_configured": true, 00:11:27.281 "data_offset": 2048, 00:11:27.281 "data_size": 63488 00:11:27.281 }, 00:11:27.281 { 00:11:27.281 "name": "BaseBdev2", 00:11:27.281 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:27.281 "is_configured": true, 00:11:27.281 "data_offset": 2048, 00:11:27.281 "data_size": 63488 00:11:27.281 } 00:11:27.281 ] 00:11:27.281 }' 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.281 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.851 18:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 18:42:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.851 "name": "raid_bdev1", 00:11:27.851 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:27.851 "strip_size_kb": 0, 00:11:27.851 "state": "online", 00:11:27.851 "raid_level": "raid1", 00:11:27.851 "superblock": true, 00:11:27.851 "num_base_bdevs": 2, 00:11:27.851 "num_base_bdevs_discovered": 2, 00:11:27.851 "num_base_bdevs_operational": 2, 00:11:27.851 "base_bdevs_list": [ 00:11:27.851 { 00:11:27.851 "name": "spare", 00:11:27.851 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:27.851 "is_configured": true, 00:11:27.851 "data_offset": 2048, 00:11:27.851 "data_size": 63488 00:11:27.851 }, 00:11:27.851 { 00:11:27.851 "name": "BaseBdev2", 00:11:27.851 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:27.851 "is_configured": true, 00:11:27.851 "data_offset": 2048, 00:11:27.851 "data_size": 63488 00:11:27.851 } 00:11:27.851 ] 00:11:27.851 }' 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:27.851 18:42:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 [2024-12-15 18:42:28.172956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.851 "name": "raid_bdev1", 00:11:27.851 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:27.851 "strip_size_kb": 0, 00:11:27.851 "state": "online", 00:11:27.851 "raid_level": "raid1", 00:11:27.851 "superblock": true, 00:11:27.851 "num_base_bdevs": 2, 00:11:27.851 "num_base_bdevs_discovered": 1, 00:11:27.851 "num_base_bdevs_operational": 1, 00:11:27.851 "base_bdevs_list": [ 00:11:27.851 { 00:11:27.851 "name": null, 00:11:27.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.851 "is_configured": false, 00:11:27.851 "data_offset": 0, 00:11:27.851 "data_size": 63488 00:11:27.851 }, 00:11:27.851 { 00:11:27.851 "name": "BaseBdev2", 00:11:27.851 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:27.851 "is_configured": true, 00:11:27.851 "data_offset": 2048, 00:11:27.851 "data_size": 63488 00:11:27.851 } 00:11:27.851 ] 00:11:27.851 }' 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.851 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.421 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:28.421 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.421 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.421 [2024-12-15 
18:42:28.652492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.421 [2024-12-15 18:42:28.652739] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:28.421 [2024-12-15 18:42:28.652810] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:28.421 [2024-12-15 18:42:28.652875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:28.421 [2024-12-15 18:42:28.658090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:11:28.421 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.421 18:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:28.421 [2024-12-15 18:42:28.660071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.361 "name": "raid_bdev1", 00:11:29.361 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:29.361 "strip_size_kb": 0, 00:11:29.361 "state": "online", 00:11:29.361 "raid_level": "raid1", 00:11:29.361 "superblock": true, 00:11:29.361 "num_base_bdevs": 2, 00:11:29.361 "num_base_bdevs_discovered": 2, 00:11:29.361 "num_base_bdevs_operational": 2, 00:11:29.361 "process": { 00:11:29.361 "type": "rebuild", 00:11:29.361 "target": "spare", 00:11:29.361 "progress": { 00:11:29.361 "blocks": 20480, 00:11:29.361 "percent": 32 00:11:29.361 } 00:11:29.361 }, 00:11:29.361 "base_bdevs_list": [ 00:11:29.361 { 00:11:29.361 "name": "spare", 00:11:29.361 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:29.361 "is_configured": true, 00:11:29.361 "data_offset": 2048, 00:11:29.361 "data_size": 63488 00:11:29.361 }, 00:11:29.361 { 00:11:29.361 "name": "BaseBdev2", 00:11:29.361 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:29.361 "is_configured": true, 00:11:29.361 "data_offset": 2048, 00:11:29.361 "data_size": 63488 00:11:29.361 } 00:11:29.361 ] 00:11:29.361 }' 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.361 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.621 [2024-12-15 18:42:29.816707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.621 [2024-12-15 18:42:29.864547] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:29.621 [2024-12-15 18:42:29.864684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.621 [2024-12-15 18:42:29.864707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:29.621 [2024-12-15 18:42:29.864716] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.621 "name": "raid_bdev1", 00:11:29.621 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:29.621 "strip_size_kb": 0, 00:11:29.621 "state": "online", 00:11:29.621 "raid_level": "raid1", 00:11:29.621 "superblock": true, 00:11:29.621 "num_base_bdevs": 2, 00:11:29.621 "num_base_bdevs_discovered": 1, 00:11:29.621 "num_base_bdevs_operational": 1, 00:11:29.621 "base_bdevs_list": [ 00:11:29.621 { 00:11:29.621 "name": null, 00:11:29.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.621 "is_configured": false, 00:11:29.621 "data_offset": 0, 00:11:29.621 "data_size": 63488 00:11:29.621 }, 00:11:29.621 { 00:11:29.621 "name": "BaseBdev2", 00:11:29.621 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:29.621 "is_configured": true, 00:11:29.621 "data_offset": 2048, 00:11:29.621 "data_size": 63488 00:11:29.621 } 00:11:29.621 ] 00:11:29.621 }' 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.621 18:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.189 18:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.189 18:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:30.189 18:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.189 [2024-12-15 18:42:30.325186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.189 [2024-12-15 18:42:30.325316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.189 [2024-12-15 18:42:30.325384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:30.189 [2024-12-15 18:42:30.325424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.189 [2024-12-15 18:42:30.325940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.189 [2024-12-15 18:42:30.326001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.189 [2024-12-15 18:42:30.326110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:30.189 [2024-12-15 18:42:30.326125] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:30.189 [2024-12-15 18:42:30.326149] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:30.189 [2024-12-15 18:42:30.326174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:30.189 spare 00:11:30.189 [2024-12-15 18:42:30.331447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:11:30.189 18:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.189 18:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:30.189 [2024-12-15 18:42:30.333423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.130 "name": "raid_bdev1", 00:11:31.130 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:31.130 "strip_size_kb": 0, 00:11:31.130 
"state": "online", 00:11:31.130 "raid_level": "raid1", 00:11:31.130 "superblock": true, 00:11:31.130 "num_base_bdevs": 2, 00:11:31.130 "num_base_bdevs_discovered": 2, 00:11:31.130 "num_base_bdevs_operational": 2, 00:11:31.130 "process": { 00:11:31.130 "type": "rebuild", 00:11:31.130 "target": "spare", 00:11:31.130 "progress": { 00:11:31.130 "blocks": 20480, 00:11:31.130 "percent": 32 00:11:31.130 } 00:11:31.130 }, 00:11:31.130 "base_bdevs_list": [ 00:11:31.130 { 00:11:31.130 "name": "spare", 00:11:31.130 "uuid": "1a040d7c-411c-5d08-a321-572a448a0f02", 00:11:31.130 "is_configured": true, 00:11:31.130 "data_offset": 2048, 00:11:31.130 "data_size": 63488 00:11:31.130 }, 00:11:31.130 { 00:11:31.130 "name": "BaseBdev2", 00:11:31.130 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:31.130 "is_configured": true, 00:11:31.130 "data_offset": 2048, 00:11:31.130 "data_size": 63488 00:11:31.130 } 00:11:31.130 ] 00:11:31.130 }' 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.130 [2024-12-15 18:42:31.497936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:31.130 [2024-12-15 18:42:31.537775] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:31.130 [2024-12-15 18:42:31.537927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.130 [2024-12-15 18:42:31.537949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:31.130 [2024-12-15 18:42:31.537961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.130 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.130 18:42:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.390 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.390 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.390 "name": "raid_bdev1", 00:11:31.390 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:31.390 "strip_size_kb": 0, 00:11:31.390 "state": "online", 00:11:31.390 "raid_level": "raid1", 00:11:31.390 "superblock": true, 00:11:31.390 "num_base_bdevs": 2, 00:11:31.390 "num_base_bdevs_discovered": 1, 00:11:31.390 "num_base_bdevs_operational": 1, 00:11:31.390 "base_bdevs_list": [ 00:11:31.390 { 00:11:31.390 "name": null, 00:11:31.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.390 "is_configured": false, 00:11:31.390 "data_offset": 0, 00:11:31.390 "data_size": 63488 00:11:31.390 }, 00:11:31.390 { 00:11:31.390 "name": "BaseBdev2", 00:11:31.390 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:31.390 "is_configured": true, 00:11:31.390 "data_offset": 2048, 00:11:31.390 "data_size": 63488 00:11:31.390 } 00:11:31.390 ] 00:11:31.390 }' 00:11:31.390 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.390 18:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:31.650 "name": "raid_bdev1", 00:11:31.650 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:31.650 "strip_size_kb": 0, 00:11:31.650 "state": "online", 00:11:31.650 "raid_level": "raid1", 00:11:31.650 "superblock": true, 00:11:31.650 "num_base_bdevs": 2, 00:11:31.650 "num_base_bdevs_discovered": 1, 00:11:31.650 "num_base_bdevs_operational": 1, 00:11:31.650 "base_bdevs_list": [ 00:11:31.650 { 00:11:31.650 "name": null, 00:11:31.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.650 "is_configured": false, 00:11:31.650 "data_offset": 0, 00:11:31.650 "data_size": 63488 00:11:31.650 }, 00:11:31.650 { 00:11:31.650 "name": "BaseBdev2", 00:11:31.650 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:31.650 "is_configured": true, 00:11:31.650 "data_offset": 2048, 00:11:31.650 "data_size": 63488 00:11:31.650 } 00:11:31.650 ] 00:11:31.650 }' 00:11:31.650 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.910 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.910 [2024-12-15 18:42:32.154101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:31.911 [2024-12-15 18:42:32.154210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.911 [2024-12-15 18:42:32.154237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:31.911 [2024-12-15 18:42:32.154249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.911 [2024-12-15 18:42:32.154695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.911 [2024-12-15 18:42:32.154718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.911 [2024-12-15 18:42:32.154792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:31.911 [2024-12-15 18:42:32.154895] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:31.911 [2024-12-15 18:42:32.154945] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:31.911 [2024-12-15 18:42:32.154994] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:31.911 BaseBdev1 00:11:31.911 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.911 18:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.851 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.852 "name": "raid_bdev1", 00:11:32.852 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:32.852 "strip_size_kb": 0, 00:11:32.852 "state": "online", 00:11:32.852 "raid_level": "raid1", 00:11:32.852 "superblock": true, 00:11:32.852 "num_base_bdevs": 2, 00:11:32.852 "num_base_bdevs_discovered": 1, 00:11:32.852 "num_base_bdevs_operational": 1, 00:11:32.852 "base_bdevs_list": [ 00:11:32.852 { 00:11:32.852 "name": null, 00:11:32.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.852 "is_configured": false, 00:11:32.852 "data_offset": 0, 00:11:32.852 "data_size": 63488 00:11:32.852 }, 00:11:32.852 { 00:11:32.852 "name": "BaseBdev2", 00:11:32.852 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:32.852 "is_configured": true, 00:11:32.852 "data_offset": 2048, 00:11:32.852 "data_size": 63488 00:11:32.852 } 00:11:32.852 ] 00:11:32.852 }' 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.852 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.419 "name": "raid_bdev1", 00:11:33.419 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:33.419 "strip_size_kb": 0, 00:11:33.419 "state": "online", 00:11:33.419 "raid_level": "raid1", 00:11:33.419 "superblock": true, 00:11:33.419 "num_base_bdevs": 2, 00:11:33.419 "num_base_bdevs_discovered": 1, 00:11:33.419 "num_base_bdevs_operational": 1, 00:11:33.419 "base_bdevs_list": [ 00:11:33.419 { 00:11:33.419 "name": null, 00:11:33.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.419 "is_configured": false, 00:11:33.419 "data_offset": 0, 00:11:33.419 "data_size": 63488 00:11:33.419 }, 00:11:33.419 { 00:11:33.419 "name": "BaseBdev2", 00:11:33.419 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:33.419 "is_configured": true, 00:11:33.419 "data_offset": 2048, 00:11:33.419 "data_size": 63488 00:11:33.419 } 00:11:33.419 ] 00:11:33.419 }' 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.419 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.419 [2024-12-15 18:42:33.819998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.420 [2024-12-15 18:42:33.820199] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:33.420 [2024-12-15 18:42:33.820251] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:33.420 request: 00:11:33.420 { 00:11:33.420 "base_bdev": "BaseBdev1", 00:11:33.420 "raid_bdev": "raid_bdev1", 00:11:33.420 "method": "bdev_raid_add_base_bdev", 00:11:33.420 "req_id": 1 00:11:33.420 } 00:11:33.420 Got JSON-RPC error response 00:11:33.420 response: 00:11:33.420 { 00:11:33.420 "code": -22, 00:11:33.420 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:33.420 } 00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.420 18:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:34.798 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.798 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.798 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.798 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.799 "name": "raid_bdev1", 00:11:34.799 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:34.799 "strip_size_kb": 0, 00:11:34.799 "state": "online", 00:11:34.799 "raid_level": "raid1", 00:11:34.799 "superblock": true, 00:11:34.799 "num_base_bdevs": 2, 00:11:34.799 "num_base_bdevs_discovered": 1, 00:11:34.799 "num_base_bdevs_operational": 1, 00:11:34.799 "base_bdevs_list": [ 00:11:34.799 { 00:11:34.799 "name": null, 00:11:34.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.799 "is_configured": false, 00:11:34.799 "data_offset": 0, 00:11:34.799 "data_size": 63488 00:11:34.799 }, 00:11:34.799 { 00:11:34.799 "name": "BaseBdev2", 00:11:34.799 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:34.799 "is_configured": true, 00:11:34.799 "data_offset": 2048, 00:11:34.799 "data_size": 63488 00:11:34.799 } 00:11:34.799 ] 00:11:34.799 }' 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.799 18:42:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.066 18:42:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.066 "name": "raid_bdev1", 00:11:35.066 "uuid": "f6b4ab3c-8bc3-49ee-818a-6e6a18d3b5ea", 00:11:35.066 "strip_size_kb": 0, 00:11:35.066 "state": "online", 00:11:35.066 "raid_level": "raid1", 00:11:35.066 "superblock": true, 00:11:35.066 "num_base_bdevs": 2, 00:11:35.066 "num_base_bdevs_discovered": 1, 00:11:35.066 "num_base_bdevs_operational": 1, 00:11:35.066 "base_bdevs_list": [ 00:11:35.066 { 00:11:35.066 "name": null, 00:11:35.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.066 "is_configured": false, 00:11:35.066 "data_offset": 0, 00:11:35.066 "data_size": 63488 00:11:35.066 }, 00:11:35.066 { 00:11:35.066 "name": "BaseBdev2", 00:11:35.066 "uuid": "24d8b898-7bfe-5546-a91e-a4dec6298c8a", 00:11:35.066 "is_configured": true, 00:11:35.066 "data_offset": 2048, 00:11:35.066 "data_size": 63488 00:11:35.066 } 00:11:35.066 ] 00:11:35.066 }' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:35.066 18:42:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89423 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89423 ']' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89423 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89423 00:11:35.066 killing process with pid 89423 00:11:35.066 Received shutdown signal, test time was about 18.178734 seconds 00:11:35.066 00:11:35.066 Latency(us) 00:11:35.066 [2024-12-15T18:42:35.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.066 [2024-12-15T18:42:35.507Z] =================================================================================================================== 00:11:35.066 [2024-12-15T18:42:35.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89423' 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89423 00:11:35.066 [2024-12-15 18:42:35.447501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.066 [2024-12-15 18:42:35.447642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.066 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89423 00:11:35.066 [2024-12-15 18:42:35.447695] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.067 [2024-12-15 18:42:35.447704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:35.067 [2024-12-15 18:42:35.475081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.326 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:35.326 00:11:35.326 real 0m20.116s 00:11:35.326 user 0m26.575s 00:11:35.326 sys 0m2.365s 00:11:35.326 ************************************ 00:11:35.326 END TEST raid_rebuild_test_sb_io 00:11:35.326 ************************************ 00:11:35.326 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.326 18:42:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.326 18:42:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:35.326 18:42:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:35.326 18:42:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:35.326 18:42:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.326 18:42:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.586 ************************************ 00:11:35.586 START TEST raid_rebuild_test 00:11:35.586 ************************************ 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:35.586 18:42:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=90123 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 90123 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 90123 ']' 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.586 18:42:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.586 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:35.586 Zero copy mechanism will not be used. 
00:11:35.586 [2024-12-15 18:42:35.863310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:35.586 [2024-12-15 18:42:35.863438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90123 ] 00:11:35.846 [2024-12-15 18:42:36.036381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.846 [2024-12-15 18:42:36.062544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.846 [2024-12-15 18:42:36.105252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.846 [2024-12-15 18:42:36.105288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 BaseBdev1_malloc 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.414 
[2024-12-15 18:42:36.716919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:36.414 [2024-12-15 18:42:36.716987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.414 [2024-12-15 18:42:36.717018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:36.414 [2024-12-15 18:42:36.717030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.414 [2024-12-15 18:42:36.719203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.414 [2024-12-15 18:42:36.719242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.414 BaseBdev1 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.414 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 BaseBdev2_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 [2024-12-15 18:42:36.745533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:36.415 [2024-12-15 18:42:36.745586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:36.415 [2024-12-15 18:42:36.745609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:36.415 [2024-12-15 18:42:36.745617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.415 [2024-12-15 18:42:36.747661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.415 [2024-12-15 18:42:36.747696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.415 BaseBdev2 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 BaseBdev3_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 [2024-12-15 18:42:36.774099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:36.415 [2024-12-15 18:42:36.774149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.415 [2024-12-15 18:42:36.774175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:36.415 [2024-12-15 18:42:36.774183] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.415 [2024-12-15 18:42:36.776228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.415 [2024-12-15 18:42:36.776264] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.415 BaseBdev3 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 BaseBdev4_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 [2024-12-15 18:42:36.813710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:36.415 [2024-12-15 18:42:36.813767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.415 [2024-12-15 18:42:36.813795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:36.415 [2024-12-15 18:42:36.813818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.415 [2024-12-15 18:42:36.815903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.415 [2024-12-15 18:42:36.815995] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.415 BaseBdev4 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 spare_malloc 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.415 spare_delay 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.415 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.674 [2024-12-15 18:42:36.854331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:36.674 [2024-12-15 18:42:36.854380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.674 [2024-12-15 18:42:36.854400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:36.674 [2024-12-15 18:42:36.854409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.674 [2024-12-15 
18:42:36.856542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.674 [2024-12-15 18:42:36.856578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:36.674 spare 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.674 [2024-12-15 18:42:36.866376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.674 [2024-12-15 18:42:36.868198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.674 [2024-12-15 18:42:36.868264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.674 [2024-12-15 18:42:36.868305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.674 [2024-12-15 18:42:36.868383] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:36.674 [2024-12-15 18:42:36.868397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:36.674 [2024-12-15 18:42:36.868676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:36.674 [2024-12-15 18:42:36.868806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:36.674 [2024-12-15 18:42:36.868839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:36.674 [2024-12-15 18:42:36.868958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.674 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.675 "name": "raid_bdev1", 00:11:36.675 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:36.675 "strip_size_kb": 0, 00:11:36.675 "state": "online", 00:11:36.675 "raid_level": 
"raid1", 00:11:36.675 "superblock": false, 00:11:36.675 "num_base_bdevs": 4, 00:11:36.675 "num_base_bdevs_discovered": 4, 00:11:36.675 "num_base_bdevs_operational": 4, 00:11:36.675 "base_bdevs_list": [ 00:11:36.675 { 00:11:36.675 "name": "BaseBdev1", 00:11:36.675 "uuid": "7fbf5a37-12f3-5f63-a746-16597dfd1c16", 00:11:36.675 "is_configured": true, 00:11:36.675 "data_offset": 0, 00:11:36.675 "data_size": 65536 00:11:36.675 }, 00:11:36.675 { 00:11:36.675 "name": "BaseBdev2", 00:11:36.675 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:36.675 "is_configured": true, 00:11:36.675 "data_offset": 0, 00:11:36.675 "data_size": 65536 00:11:36.675 }, 00:11:36.675 { 00:11:36.675 "name": "BaseBdev3", 00:11:36.675 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:36.675 "is_configured": true, 00:11:36.675 "data_offset": 0, 00:11:36.675 "data_size": 65536 00:11:36.675 }, 00:11:36.675 { 00:11:36.675 "name": "BaseBdev4", 00:11:36.675 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:36.675 "is_configured": true, 00:11:36.675 "data_offset": 0, 00:11:36.675 "data_size": 65536 00:11:36.675 } 00:11:36.675 ] 00:11:36.675 }' 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.675 18:42:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.934 [2024-12-15 18:42:37.333959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.934 18:42:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.934 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.194 18:42:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:37.194 [2024-12-15 18:42:37.609163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:37.194 /dev/nbd0 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.453 1+0 records in 00:11:37.453 1+0 records out 00:11:37.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335055 s, 12.2 MB/s 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:37.453 18:42:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:42.734 65536+0 records in 00:11:42.734 65536+0 records out 00:11:42.734 33554432 bytes (34 MB, 32 MiB) copied, 5.3294 s, 6.3 MB/s 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.734 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:43.004 [2024-12-15 18:42:43.204137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.004 
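The `waitfornbd` trace above polls `/proc/partitions` until the NBD device shows up, then verifies it with a one-block `dd` read. A minimal sketch of that polling pattern, with the partitions file parameterized so it can run outside a real NBD setup (`waitfordev` and the `partitions_file` argument are illustrative names, not part of the SPDK scripts):

```shell
# Poll a partitions listing until a device name appears, up to 20 tries,
# mirroring the waitfornbd loop in autotest_common.sh.
waitfordev() {
    local dev_name=$1
    local partitions_file=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # Match the bare device name as a whole word, as the test does.
        if grep -q -w "$dev_name" "$partitions_file"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

The real helper additionally confirms the device is readable with `dd iflag=direct` before returning, which is why the trace shows a `1+0 records in/out` line between the `grep` and the `return 0`.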
18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.004 [2024-12-15 18:42:43.242791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.004 18:42:43 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.004 "name": "raid_bdev1", 00:11:43.004 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:43.004 "strip_size_kb": 0, 00:11:43.004 "state": "online", 00:11:43.004 "raid_level": "raid1", 00:11:43.004 "superblock": false, 00:11:43.004 "num_base_bdevs": 4, 00:11:43.004 "num_base_bdevs_discovered": 3, 00:11:43.004 "num_base_bdevs_operational": 3, 00:11:43.004 "base_bdevs_list": [ 00:11:43.004 { 00:11:43.004 "name": null, 00:11:43.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.004 "is_configured": false, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": "BaseBdev2", 00:11:43.004 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:43.004 "is_configured": true, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": "BaseBdev3", 00:11:43.004 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:43.004 "is_configured": true, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": "BaseBdev4", 00:11:43.004 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:43.004 
"is_configured": true, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 } 00:11:43.004 ] 00:11:43.004 }' 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.004 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:43.264 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.264 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 [2024-12-15 18:42:43.630172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:43.264 [2024-12-15 18:42:43.634324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:43.264 18:42:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.264 18:42:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:43.264 [2024-12-15 18:42:43.636208] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.644 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.644 "name": "raid_bdev1", 00:11:44.644 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:44.644 "strip_size_kb": 0, 00:11:44.644 "state": "online", 00:11:44.644 "raid_level": "raid1", 00:11:44.644 "superblock": false, 00:11:44.644 "num_base_bdevs": 4, 00:11:44.644 "num_base_bdevs_discovered": 4, 00:11:44.644 "num_base_bdevs_operational": 4, 00:11:44.644 "process": { 00:11:44.644 "type": "rebuild", 00:11:44.644 "target": "spare", 00:11:44.644 "progress": { 00:11:44.644 "blocks": 20480, 00:11:44.644 "percent": 31 00:11:44.644 } 00:11:44.644 }, 00:11:44.644 "base_bdevs_list": [ 00:11:44.644 { 00:11:44.644 "name": "spare", 00:11:44.644 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:44.644 "is_configured": true, 00:11:44.644 "data_offset": 0, 00:11:44.644 "data_size": 65536 00:11:44.644 }, 00:11:44.644 { 00:11:44.644 "name": "BaseBdev2", 00:11:44.644 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:44.644 "is_configured": true, 00:11:44.644 "data_offset": 0, 00:11:44.644 "data_size": 65536 00:11:44.644 }, 00:11:44.644 { 00:11:44.644 "name": "BaseBdev3", 00:11:44.644 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:44.644 "is_configured": true, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 }, 00:11:44.645 { 00:11:44.645 "name": "BaseBdev4", 00:11:44.645 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:44.645 "is_configured": true, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 } 00:11:44.645 ] 00:11:44.645 }' 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.645 [2024-12-15 18:42:44.777187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.645 [2024-12-15 18:42:44.841100] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:44.645 [2024-12-15 18:42:44.841234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.645 [2024-12-15 18:42:44.841261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:44.645 [2024-12-15 18:42:44.841270] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
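The `verify_raid_bdev_process` checks in the trace lean on jq's `//` alternative operator: `.process.type // "none"` yields `"none"` whenever the `process` object is absent, so one expression covers the bdev before, during, and after a rebuild. A small self-contained sketch (the JSON literals here are trimmed stand-ins for the RPC output shown above):

```shell
# During a rebuild the RPC output carries a "process" object.
raid_bdev_info='{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

# Once the rebuild finishes, "process" disappears entirely; jq's `//`
# fallback turns the resulting null into the literal string "none".
finished_info='{"name": "raid_bdev1", "state": "online"}'
finished_type=$(jq -r '.process.type // "none"' <<< "$finished_info")
```

This is why the later `verify_raid_bdev_process raid_bdev1 none none` call in the trace can reuse the same two jq expressions unchanged.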
00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.645 "name": "raid_bdev1", 00:11:44.645 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:44.645 "strip_size_kb": 0, 00:11:44.645 "state": "online", 00:11:44.645 "raid_level": "raid1", 00:11:44.645 "superblock": false, 00:11:44.645 "num_base_bdevs": 4, 00:11:44.645 "num_base_bdevs_discovered": 3, 00:11:44.645 "num_base_bdevs_operational": 3, 00:11:44.645 "base_bdevs_list": [ 00:11:44.645 { 00:11:44.645 "name": null, 00:11:44.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.645 "is_configured": false, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 }, 00:11:44.645 { 00:11:44.645 "name": "BaseBdev2", 00:11:44.645 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:44.645 "is_configured": true, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 }, 00:11:44.645 { 
00:11:44.645 "name": "BaseBdev3", 00:11:44.645 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:44.645 "is_configured": true, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 }, 00:11:44.645 { 00:11:44.645 "name": "BaseBdev4", 00:11:44.645 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:44.645 "is_configured": true, 00:11:44.645 "data_offset": 0, 00:11:44.645 "data_size": 65536 00:11:44.645 } 00:11:44.645 ] 00:11:44.645 }' 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.645 18:42:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.905 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.905 "name": "raid_bdev1", 00:11:44.905 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:44.905 "strip_size_kb": 0, 00:11:44.905 "state": "online", 
00:11:44.905 "raid_level": "raid1", 00:11:44.905 "superblock": false, 00:11:44.905 "num_base_bdevs": 4, 00:11:44.905 "num_base_bdevs_discovered": 3, 00:11:44.905 "num_base_bdevs_operational": 3, 00:11:44.905 "base_bdevs_list": [ 00:11:44.905 { 00:11:44.905 "name": null, 00:11:44.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.905 "is_configured": false, 00:11:44.905 "data_offset": 0, 00:11:44.905 "data_size": 65536 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "name": "BaseBdev2", 00:11:44.905 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:44.905 "is_configured": true, 00:11:44.905 "data_offset": 0, 00:11:44.905 "data_size": 65536 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "name": "BaseBdev3", 00:11:44.905 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:44.905 "is_configured": true, 00:11:44.905 "data_offset": 0, 00:11:44.905 "data_size": 65536 00:11:44.905 }, 00:11:44.905 { 00:11:44.905 "name": "BaseBdev4", 00:11:44.905 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:44.905 "is_configured": true, 00:11:44.905 "data_offset": 0, 00:11:44.905 "data_size": 65536 00:11:44.905 } 00:11:44.905 ] 00:11:44.905 }' 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.164 [2024-12-15 18:42:45.436885] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:45.164 [2024-12-15 18:42:45.441094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.164 18:42:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:45.164 [2024-12-15 18:42:45.443075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.102 "name": "raid_bdev1", 00:11:46.102 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:46.102 "strip_size_kb": 0, 00:11:46.102 "state": "online", 00:11:46.102 "raid_level": "raid1", 00:11:46.102 "superblock": false, 00:11:46.102 "num_base_bdevs": 4, 00:11:46.102 
"num_base_bdevs_discovered": 4, 00:11:46.102 "num_base_bdevs_operational": 4, 00:11:46.102 "process": { 00:11:46.102 "type": "rebuild", 00:11:46.102 "target": "spare", 00:11:46.102 "progress": { 00:11:46.102 "blocks": 20480, 00:11:46.102 "percent": 31 00:11:46.102 } 00:11:46.102 }, 00:11:46.102 "base_bdevs_list": [ 00:11:46.102 { 00:11:46.102 "name": "spare", 00:11:46.102 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 0, 00:11:46.102 "data_size": 65536 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "BaseBdev2", 00:11:46.102 "uuid": "da84aa30-855a-5c0c-9025-63c84830d005", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 0, 00:11:46.102 "data_size": 65536 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "BaseBdev3", 00:11:46.102 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 0, 00:11:46.102 "data_size": 65536 00:11:46.102 }, 00:11:46.102 { 00:11:46.102 "name": "BaseBdev4", 00:11:46.102 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:46.102 "is_configured": true, 00:11:46.102 "data_offset": 0, 00:11:46.102 "data_size": 65536 00:11:46.102 } 00:11:46.102 ] 00:11:46.102 }' 00:11:46.102 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.362 [2024-12-15 18:42:46.599647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.362 [2024-12-15 18:42:46.647329] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.362 18:42:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.362 "name": "raid_bdev1", 00:11:46.362 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:46.362 "strip_size_kb": 0, 00:11:46.362 "state": "online", 00:11:46.362 "raid_level": "raid1", 00:11:46.362 "superblock": false, 00:11:46.362 "num_base_bdevs": 4, 00:11:46.362 "num_base_bdevs_discovered": 3, 00:11:46.362 "num_base_bdevs_operational": 3, 00:11:46.362 "process": { 00:11:46.362 "type": "rebuild", 00:11:46.362 "target": "spare", 00:11:46.362 "progress": { 00:11:46.362 "blocks": 24576, 00:11:46.362 "percent": 37 00:11:46.362 } 00:11:46.362 }, 00:11:46.362 "base_bdevs_list": [ 00:11:46.362 { 00:11:46.362 "name": "spare", 00:11:46.362 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:46.362 "is_configured": true, 00:11:46.362 "data_offset": 0, 00:11:46.362 "data_size": 65536 00:11:46.362 }, 00:11:46.362 { 00:11:46.362 "name": null, 00:11:46.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.362 "is_configured": false, 00:11:46.362 "data_offset": 0, 00:11:46.362 "data_size": 65536 00:11:46.362 }, 00:11:46.362 { 00:11:46.362 "name": "BaseBdev3", 00:11:46.362 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:46.362 "is_configured": true, 00:11:46.362 "data_offset": 0, 00:11:46.362 "data_size": 65536 00:11:46.362 }, 00:11:46.362 { 00:11:46.362 "name": "BaseBdev4", 00:11:46.362 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:46.362 "is_configured": true, 00:11:46.362 "data_offset": 0, 00:11:46.362 "data_size": 65536 00:11:46.362 } 00:11:46.362 ] 00:11:46.362 }' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=363 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.362 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.622 "name": "raid_bdev1", 00:11:46.622 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:46.622 "strip_size_kb": 0, 00:11:46.622 "state": "online", 00:11:46.622 "raid_level": "raid1", 00:11:46.622 "superblock": false, 00:11:46.622 "num_base_bdevs": 4, 00:11:46.622 "num_base_bdevs_discovered": 3, 00:11:46.622 "num_base_bdevs_operational": 3, 00:11:46.622 "process": { 00:11:46.622 "type": "rebuild", 00:11:46.622 "target": "spare", 00:11:46.622 "progress": { 
00:11:46.622 "blocks": 26624, 00:11:46.622 "percent": 40 00:11:46.622 } 00:11:46.622 }, 00:11:46.622 "base_bdevs_list": [ 00:11:46.622 { 00:11:46.622 "name": "spare", 00:11:46.622 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:46.622 "is_configured": true, 00:11:46.622 "data_offset": 0, 00:11:46.622 "data_size": 65536 00:11:46.622 }, 00:11:46.622 { 00:11:46.622 "name": null, 00:11:46.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.622 "is_configured": false, 00:11:46.622 "data_offset": 0, 00:11:46.622 "data_size": 65536 00:11:46.622 }, 00:11:46.622 { 00:11:46.622 "name": "BaseBdev3", 00:11:46.622 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:46.622 "is_configured": true, 00:11:46.622 "data_offset": 0, 00:11:46.622 "data_size": 65536 00:11:46.622 }, 00:11:46.622 { 00:11:46.622 "name": "BaseBdev4", 00:11:46.622 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:46.622 "is_configured": true, 00:11:46.622 "data_offset": 0, 00:11:46.622 "data_size": 65536 00:11:46.622 } 00:11:46.622 ] 00:11:46.622 }' 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.622 18:42:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
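The `local timeout=363` / `(( SECONDS < timeout ))` / `sleep 1` sequence above is bash's built-in `SECONDS` counter used as a wall-clock bound: the script re-fetches the raid bdev info once per second until the rebuild progresses or the deadline passes. A generic sketch of that loop under assumed names (`wait_for_condition` is illustrative; the real script inlines the rpc_cmd/jq check and uses a much longer deadline):

```shell
# Retry a command once per second until it succeeds or a deadline,
# expressed via bash's auto-incrementing SECONDS variable, expires.
wait_for_condition() {
    local timeout=$((SECONDS + 10))
    while ((SECONDS < timeout)); do
        if "$@"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

Because `SECONDS` counts from shell startup rather than from loop entry, the trace's `timeout=363` is the absolute deadline (current `SECONDS` plus the allowed wait), which is why the same `(( SECONDS < timeout ))` guard can be re-entered across iterations without resetting anything.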
local process_type=rebuild 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.560 "name": "raid_bdev1", 00:11:47.560 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:47.560 "strip_size_kb": 0, 00:11:47.560 "state": "online", 00:11:47.560 "raid_level": "raid1", 00:11:47.560 "superblock": false, 00:11:47.560 "num_base_bdevs": 4, 00:11:47.560 "num_base_bdevs_discovered": 3, 00:11:47.560 "num_base_bdevs_operational": 3, 00:11:47.560 "process": { 00:11:47.560 "type": "rebuild", 00:11:47.560 "target": "spare", 00:11:47.560 "progress": { 00:11:47.560 "blocks": 49152, 00:11:47.560 "percent": 75 00:11:47.560 } 00:11:47.560 }, 00:11:47.560 "base_bdevs_list": [ 00:11:47.560 { 00:11:47.560 "name": "spare", 00:11:47.560 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:47.560 "is_configured": true, 00:11:47.560 "data_offset": 0, 00:11:47.560 "data_size": 65536 00:11:47.560 }, 00:11:47.560 { 00:11:47.560 "name": null, 00:11:47.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.560 "is_configured": false, 00:11:47.560 "data_offset": 0, 00:11:47.560 "data_size": 65536 00:11:47.560 }, 00:11:47.560 { 00:11:47.560 "name": "BaseBdev3", 00:11:47.560 "uuid": 
"2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:47.560 "is_configured": true, 00:11:47.560 "data_offset": 0, 00:11:47.560 "data_size": 65536 00:11:47.560 }, 00:11:47.560 { 00:11:47.560 "name": "BaseBdev4", 00:11:47.560 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:47.560 "is_configured": true, 00:11:47.560 "data_offset": 0, 00:11:47.560 "data_size": 65536 00:11:47.560 } 00:11:47.560 ] 00:11:47.560 }' 00:11:47.560 18:42:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.820 18:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:47.820 18:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.820 18:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:47.820 18:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:48.390 [2024-12-15 18:42:48.654793] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:48.390 [2024-12-15 18:42:48.654941] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:48.390 [2024-12-15 18:42:48.655005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.960 18:42:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.960 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.960 "name": "raid_bdev1", 00:11:48.960 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:48.960 "strip_size_kb": 0, 00:11:48.960 "state": "online", 00:11:48.960 "raid_level": "raid1", 00:11:48.960 "superblock": false, 00:11:48.960 "num_base_bdevs": 4, 00:11:48.960 "num_base_bdevs_discovered": 3, 00:11:48.960 "num_base_bdevs_operational": 3, 00:11:48.960 "base_bdevs_list": [ 00:11:48.960 { 00:11:48.960 "name": "spare", 00:11:48.960 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:48.960 "is_configured": true, 00:11:48.960 "data_offset": 0, 00:11:48.960 "data_size": 65536 00:11:48.960 }, 00:11:48.960 { 00:11:48.960 "name": null, 00:11:48.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.960 "is_configured": false, 00:11:48.960 "data_offset": 0, 00:11:48.960 "data_size": 65536 00:11:48.960 }, 00:11:48.960 { 00:11:48.960 "name": "BaseBdev3", 00:11:48.960 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:48.960 "is_configured": true, 00:11:48.960 "data_offset": 0, 00:11:48.960 "data_size": 65536 00:11:48.960 }, 00:11:48.960 { 00:11:48.960 "name": "BaseBdev4", 00:11:48.960 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:48.960 "is_configured": true, 00:11:48.960 "data_offset": 0, 00:11:48.960 "data_size": 65536 00:11:48.960 } 00:11:48.960 ] 00:11:48.960 }' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.961 "name": "raid_bdev1", 00:11:48.961 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:48.961 "strip_size_kb": 0, 00:11:48.961 "state": "online", 00:11:48.961 "raid_level": "raid1", 00:11:48.961 "superblock": false, 00:11:48.961 "num_base_bdevs": 4, 00:11:48.961 "num_base_bdevs_discovered": 3, 00:11:48.961 "num_base_bdevs_operational": 3, 00:11:48.961 
"base_bdevs_list": [ 00:11:48.961 { 00:11:48.961 "name": "spare", 00:11:48.961 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:48.961 "is_configured": true, 00:11:48.961 "data_offset": 0, 00:11:48.961 "data_size": 65536 00:11:48.961 }, 00:11:48.961 { 00:11:48.961 "name": null, 00:11:48.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.961 "is_configured": false, 00:11:48.961 "data_offset": 0, 00:11:48.961 "data_size": 65536 00:11:48.961 }, 00:11:48.961 { 00:11:48.961 "name": "BaseBdev3", 00:11:48.961 "uuid": "2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:48.961 "is_configured": true, 00:11:48.961 "data_offset": 0, 00:11:48.961 "data_size": 65536 00:11:48.961 }, 00:11:48.961 { 00:11:48.961 "name": "BaseBdev4", 00:11:48.961 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:48.961 "is_configured": true, 00:11:48.961 "data_offset": 0, 00:11:48.961 "data_size": 65536 00:11:48.961 } 00:11:48.961 ] 00:11:48.961 }' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.961 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.221 "name": "raid_bdev1", 00:11:49.221 "uuid": "1fef49c7-6cfb-4e78-965f-be254540baf1", 00:11:49.221 "strip_size_kb": 0, 00:11:49.221 "state": "online", 00:11:49.221 "raid_level": "raid1", 00:11:49.221 "superblock": false, 00:11:49.221 "num_base_bdevs": 4, 00:11:49.221 "num_base_bdevs_discovered": 3, 00:11:49.221 "num_base_bdevs_operational": 3, 00:11:49.221 "base_bdevs_list": [ 00:11:49.221 { 00:11:49.221 "name": "spare", 00:11:49.221 "uuid": "5eaa8016-5c76-5609-8c85-be3aa50880d7", 00:11:49.221 "is_configured": true, 00:11:49.221 "data_offset": 0, 00:11:49.221 "data_size": 65536 00:11:49.221 }, 00:11:49.221 { 00:11:49.221 "name": null, 00:11:49.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.221 "is_configured": false, 00:11:49.221 "data_offset": 0, 00:11:49.221 "data_size": 65536 00:11:49.221 }, 00:11:49.221 { 00:11:49.221 "name": "BaseBdev3", 00:11:49.221 "uuid": 
"2f1f0ff4-3f84-5923-9fb8-33a2f5c3b595", 00:11:49.221 "is_configured": true, 00:11:49.221 "data_offset": 0, 00:11:49.221 "data_size": 65536 00:11:49.221 }, 00:11:49.221 { 00:11:49.221 "name": "BaseBdev4", 00:11:49.221 "uuid": "293f6dc5-3977-58a9-8bd8-4ead1f81cd1f", 00:11:49.221 "is_configured": true, 00:11:49.221 "data_offset": 0, 00:11:49.221 "data_size": 65536 00:11:49.221 } 00:11:49.221 ] 00:11:49.221 }' 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.221 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.482 [2024-12-15 18:42:49.781420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.482 [2024-12-15 18:42:49.781451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.482 [2024-12-15 18:42:49.781550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.482 [2024-12-15 18:42:49.781631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.482 [2024-12-15 18:42:49.781642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.482 18:42:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:49.742 /dev/nbd0 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:49.742 18:42:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.742 1+0 records in 00:11:49.742 1+0 records out 00:11:49.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189025 s, 21.7 MB/s 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:49.742 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:50.002 /dev/nbd1 00:11:50.002 
18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:50.002 1+0 records in 00:11:50.002 1+0 records out 00:11:50.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375862 s, 10.9 MB/s 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.002 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.262 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 90123 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 90123 ']' 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 90123 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90123 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90123' 00:11:50.522 killing process with pid 90123 00:11:50.522 
18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 90123 00:11:50.522 Received shutdown signal, test time was about 60.000000 seconds 00:11:50.522 00:11:50.522 Latency(us) 00:11:50.522 [2024-12-15T18:42:50.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.522 [2024-12-15T18:42:50.963Z] =================================================================================================================== 00:11:50.522 [2024-12-15T18:42:50.963Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.522 [2024-12-15 18:42:50.880884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.522 18:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 90123 00:11:50.522 [2024-12-15 18:42:50.933417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.782 18:42:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:50.782 00:11:50.782 real 0m15.378s 00:11:50.782 user 0m17.371s 00:11:50.782 sys 0m3.000s 00:11:50.782 18:42:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.782 ************************************ 00:11:50.782 END TEST raid_rebuild_test 00:11:50.782 ************************************ 00:11:50.782 18:42:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.782 18:42:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:50.782 18:42:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:50.782 18:42:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.782 18:42:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.042 ************************************ 00:11:51.042 START TEST raid_rebuild_test_sb 00:11:51.042 ************************************ 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90548 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90548 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90548 ']' 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.042 18:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.042 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.042 Zero copy mechanism will not be used. 00:11:51.042 [2024-12-15 18:42:51.319817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:51.042 [2024-12-15 18:42:51.319948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90548 ] 00:11:51.302 [2024-12-15 18:42:51.486031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.302 [2024-12-15 18:42:51.510876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.302 [2024-12-15 18:42:51.553400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.302 [2024-12-15 18:42:51.553436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.871 18:42:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.871 BaseBdev1_malloc 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.871 [2024-12-15 18:42:52.201202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.871 [2024-12-15 18:42:52.201320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.871 [2024-12-15 18:42:52.201362] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.871 [2024-12-15 18:42:52.201374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.871 [2024-12-15 18:42:52.203527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.871 [2024-12-15 18:42:52.203563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.871 BaseBdev1 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.871 
BaseBdev2_malloc 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.871 [2024-12-15 18:42:52.229709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.871 [2024-12-15 18:42:52.229762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.871 [2024-12-15 18:42:52.229782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.871 [2024-12-15 18:42:52.229790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.871 [2024-12-15 18:42:52.231871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.871 [2024-12-15 18:42:52.231901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.871 BaseBdev2 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.871 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.872 BaseBdev3_malloc 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.872 [2024-12-15 18:42:52.258244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:51.872 [2024-12-15 18:42:52.258332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.872 [2024-12-15 18:42:52.258378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:51.872 [2024-12-15 18:42:52.258387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.872 [2024-12-15 18:42:52.260412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.872 [2024-12-15 18:42:52.260456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.872 BaseBdev3 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.872 BaseBdev4_malloc 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.872 [2024-12-15 18:42:52.296661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:51.872 [2024-12-15 18:42:52.296719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.872 [2024-12-15 18:42:52.296746] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:51.872 [2024-12-15 18:42:52.296755] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.872 [2024-12-15 18:42:52.298937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.872 [2024-12-15 18:42:52.299028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.872 BaseBdev4 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.872 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.132 spare_malloc 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.132 spare_delay 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.132 [2024-12-15 18:42:52.337365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:52.132 [2024-12-15 18:42:52.337412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.132 [2024-12-15 18:42:52.337431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:52.132 [2024-12-15 18:42:52.337440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.132 [2024-12-15 18:42:52.339530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.132 [2024-12-15 18:42:52.339598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:52.132 spare 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.132 [2024-12-15 18:42:52.349427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.132 [2024-12-15 18:42:52.351341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.132 [2024-12-15 18:42:52.351426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.132 [2024-12-15 18:42:52.351470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:11:52.132 [2024-12-15 18:42:52.351637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:52.132 [2024-12-15 18:42:52.351649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.132 [2024-12-15 18:42:52.351957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:52.132 [2024-12-15 18:42:52.352096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:52.132 [2024-12-15 18:42:52.352109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:52.132 [2024-12-15 18:42:52.352239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.132 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.132 "name": "raid_bdev1", 00:11:52.132 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:11:52.132 "strip_size_kb": 0, 00:11:52.132 "state": "online", 00:11:52.132 "raid_level": "raid1", 00:11:52.132 "superblock": true, 00:11:52.133 "num_base_bdevs": 4, 00:11:52.133 "num_base_bdevs_discovered": 4, 00:11:52.133 "num_base_bdevs_operational": 4, 00:11:52.133 "base_bdevs_list": [ 00:11:52.133 { 00:11:52.133 "name": "BaseBdev1", 00:11:52.133 "uuid": "89305bdc-6bec-58c5-8ea6-31a4b4398325", 00:11:52.133 "is_configured": true, 00:11:52.133 "data_offset": 2048, 00:11:52.133 "data_size": 63488 00:11:52.133 }, 00:11:52.133 { 00:11:52.133 "name": "BaseBdev2", 00:11:52.133 "uuid": "2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:11:52.133 "is_configured": true, 00:11:52.133 "data_offset": 2048, 00:11:52.133 "data_size": 63488 00:11:52.133 }, 00:11:52.133 { 00:11:52.133 "name": "BaseBdev3", 00:11:52.133 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:11:52.133 "is_configured": true, 00:11:52.133 "data_offset": 2048, 00:11:52.133 "data_size": 63488 00:11:52.133 }, 00:11:52.133 { 00:11:52.133 "name": "BaseBdev4", 00:11:52.133 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:11:52.133 "is_configured": true, 00:11:52.133 "data_offset": 2048, 00:11:52.133 "data_size": 63488 00:11:52.133 } 00:11:52.133 ] 00:11:52.133 }' 
00:11:52.133 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.133 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 [2024-12-15 18:42:52.785002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.393 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.653 18:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.653 [2024-12-15 18:42:53.056358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:52.653 /dev/nbd0 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.912 1+0 records in 00:11:52.912 1+0 records out 00:11:52.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570046 s, 7.2 MB/s 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:52.912 18:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:58.231 63488+0 records in 00:11:58.231 63488+0 records out 00:11:58.231 32505856 bytes (33 MB, 31 MiB) copied, 5.49615 s, 5.9 MB/s 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:58.231 18:42:58 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.231 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:58.491 [2024-12-15 18:42:58.802127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.491 [2024-12-15 18:42:58.838041] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.491 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.492 "name": "raid_bdev1", 00:11:58.492 "uuid": 
"a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:11:58.492 "strip_size_kb": 0, 00:11:58.492 "state": "online", 00:11:58.492 "raid_level": "raid1", 00:11:58.492 "superblock": true, 00:11:58.492 "num_base_bdevs": 4, 00:11:58.492 "num_base_bdevs_discovered": 3, 00:11:58.492 "num_base_bdevs_operational": 3, 00:11:58.492 "base_bdevs_list": [ 00:11:58.492 { 00:11:58.492 "name": null, 00:11:58.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.492 "is_configured": false, 00:11:58.492 "data_offset": 0, 00:11:58.492 "data_size": 63488 00:11:58.492 }, 00:11:58.492 { 00:11:58.492 "name": "BaseBdev2", 00:11:58.492 "uuid": "2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:11:58.492 "is_configured": true, 00:11:58.492 "data_offset": 2048, 00:11:58.492 "data_size": 63488 00:11:58.492 }, 00:11:58.492 { 00:11:58.492 "name": "BaseBdev3", 00:11:58.492 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:11:58.492 "is_configured": true, 00:11:58.492 "data_offset": 2048, 00:11:58.492 "data_size": 63488 00:11:58.492 }, 00:11:58.492 { 00:11:58.492 "name": "BaseBdev4", 00:11:58.492 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:11:58.492 "is_configured": true, 00:11:58.492 "data_offset": 2048, 00:11:58.492 "data_size": 63488 00:11:58.492 } 00:11:58.492 ] 00:11:58.492 }' 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.492 18:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 18:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.061 18:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.061 18:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.061 [2024-12-15 18:42:59.257331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.061 [2024-12-15 18:42:59.261526] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:11:59.061 18:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.061 18:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:59.061 [2024-12-15 18:42:59.263437] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.001 "name": "raid_bdev1", 00:12:00.001 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:00.001 "strip_size_kb": 0, 00:12:00.001 "state": "online", 00:12:00.001 "raid_level": "raid1", 00:12:00.001 "superblock": true, 00:12:00.001 "num_base_bdevs": 4, 00:12:00.001 "num_base_bdevs_discovered": 4, 00:12:00.001 "num_base_bdevs_operational": 4, 00:12:00.001 "process": { 00:12:00.001 "type": 
"rebuild", 00:12:00.001 "target": "spare", 00:12:00.001 "progress": { 00:12:00.001 "blocks": 20480, 00:12:00.001 "percent": 32 00:12:00.001 } 00:12:00.001 }, 00:12:00.001 "base_bdevs_list": [ 00:12:00.001 { 00:12:00.001 "name": "spare", 00:12:00.001 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:00.001 "is_configured": true, 00:12:00.001 "data_offset": 2048, 00:12:00.001 "data_size": 63488 00:12:00.001 }, 00:12:00.001 { 00:12:00.001 "name": "BaseBdev2", 00:12:00.001 "uuid": "2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:12:00.001 "is_configured": true, 00:12:00.001 "data_offset": 2048, 00:12:00.001 "data_size": 63488 00:12:00.001 }, 00:12:00.001 { 00:12:00.001 "name": "BaseBdev3", 00:12:00.001 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:00.001 "is_configured": true, 00:12:00.001 "data_offset": 2048, 00:12:00.001 "data_size": 63488 00:12:00.001 }, 00:12:00.001 { 00:12:00.001 "name": "BaseBdev4", 00:12:00.001 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:00.001 "is_configured": true, 00:12:00.001 "data_offset": 2048, 00:12:00.001 "data_size": 63488 00:12:00.001 } 00:12:00.001 ] 00:12:00.001 }' 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:00.001 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.002 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.002 [2024-12-15 18:43:00.428475] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.262 [2024-12-15 18:43:00.468388] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:00.262 [2024-12-15 18:43:00.468474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.262 [2024-12-15 18:43:00.468498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.262 [2024-12-15 18:43:00.468506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.262 "name": "raid_bdev1", 00:12:00.262 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:00.262 "strip_size_kb": 0, 00:12:00.262 "state": "online", 00:12:00.262 "raid_level": "raid1", 00:12:00.262 "superblock": true, 00:12:00.262 "num_base_bdevs": 4, 00:12:00.262 "num_base_bdevs_discovered": 3, 00:12:00.262 "num_base_bdevs_operational": 3, 00:12:00.262 "base_bdevs_list": [ 00:12:00.262 { 00:12:00.262 "name": null, 00:12:00.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.262 "is_configured": false, 00:12:00.262 "data_offset": 0, 00:12:00.262 "data_size": 63488 00:12:00.262 }, 00:12:00.262 { 00:12:00.262 "name": "BaseBdev2", 00:12:00.262 "uuid": "2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:12:00.262 "is_configured": true, 00:12:00.262 "data_offset": 2048, 00:12:00.262 "data_size": 63488 00:12:00.262 }, 00:12:00.262 { 00:12:00.262 "name": "BaseBdev3", 00:12:00.262 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:00.262 "is_configured": true, 00:12:00.262 "data_offset": 2048, 00:12:00.262 "data_size": 63488 00:12:00.262 }, 00:12:00.262 { 00:12:00.262 "name": "BaseBdev4", 00:12:00.262 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:00.262 "is_configured": true, 00:12:00.262 "data_offset": 2048, 00:12:00.262 "data_size": 63488 00:12:00.262 } 00:12:00.262 ] 00:12:00.262 }' 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.262 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.522 18:43:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.522 18:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.782 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.782 "name": "raid_bdev1", 00:12:00.782 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:00.782 "strip_size_kb": 0, 00:12:00.782 "state": "online", 00:12:00.782 "raid_level": "raid1", 00:12:00.782 "superblock": true, 00:12:00.782 "num_base_bdevs": 4, 00:12:00.782 "num_base_bdevs_discovered": 3, 00:12:00.782 "num_base_bdevs_operational": 3, 00:12:00.782 "base_bdevs_list": [ 00:12:00.782 { 00:12:00.782 "name": null, 00:12:00.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.782 "is_configured": false, 00:12:00.782 "data_offset": 0, 00:12:00.782 "data_size": 63488 00:12:00.782 }, 00:12:00.782 { 00:12:00.782 "name": "BaseBdev2", 00:12:00.782 "uuid": "2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:12:00.782 "is_configured": true, 00:12:00.782 "data_offset": 2048, 00:12:00.782 "data_size": 
63488 00:12:00.782 }, 00:12:00.782 { 00:12:00.782 "name": "BaseBdev3", 00:12:00.782 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:00.782 "is_configured": true, 00:12:00.782 "data_offset": 2048, 00:12:00.782 "data_size": 63488 00:12:00.782 }, 00:12:00.782 { 00:12:00.782 "name": "BaseBdev4", 00:12:00.782 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:00.782 "is_configured": true, 00:12:00.782 "data_offset": 2048, 00:12:00.782 "data_size": 63488 00:12:00.782 } 00:12:00.782 ] 00:12:00.782 }' 00:12:00.782 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.782 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.782 18:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.782 [2024-12-15 18:43:01.044169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:00.782 [2024-12-15 18:43:01.048323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.782 18:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:00.782 [2024-12-15 18:43:01.050268] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.722 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.722 "name": "raid_bdev1", 00:12:01.722 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:01.722 "strip_size_kb": 0, 00:12:01.722 "state": "online", 00:12:01.722 "raid_level": "raid1", 00:12:01.722 "superblock": true, 00:12:01.722 "num_base_bdevs": 4, 00:12:01.722 "num_base_bdevs_discovered": 4, 00:12:01.722 "num_base_bdevs_operational": 4, 00:12:01.722 "process": { 00:12:01.722 "type": "rebuild", 00:12:01.722 "target": "spare", 00:12:01.722 "progress": { 00:12:01.722 "blocks": 20480, 00:12:01.722 "percent": 32 00:12:01.723 } 00:12:01.723 }, 00:12:01.723 "base_bdevs_list": [ 00:12:01.723 { 00:12:01.723 "name": "spare", 00:12:01.723 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:01.723 "is_configured": true, 00:12:01.723 "data_offset": 2048, 00:12:01.723 "data_size": 63488 00:12:01.723 }, 00:12:01.723 { 00:12:01.723 "name": "BaseBdev2", 00:12:01.723 "uuid": 
"2ce33e1f-f0df-5b65-a32e-09452e07b947", 00:12:01.723 "is_configured": true, 00:12:01.723 "data_offset": 2048, 00:12:01.723 "data_size": 63488 00:12:01.723 }, 00:12:01.723 { 00:12:01.723 "name": "BaseBdev3", 00:12:01.723 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:01.723 "is_configured": true, 00:12:01.723 "data_offset": 2048, 00:12:01.723 "data_size": 63488 00:12:01.723 }, 00:12:01.723 { 00:12:01.723 "name": "BaseBdev4", 00:12:01.723 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:01.723 "is_configured": true, 00:12:01.723 "data_offset": 2048, 00:12:01.723 "data_size": 63488 00:12:01.723 } 00:12:01.723 ] 00:12:01.723 }' 00:12:01.723 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.723 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.723 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:01.983 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.983 18:43:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.983 [2024-12-15 18:43:02.194888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.983 [2024-12-15 18:43:02.354645] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.983 "name": "raid_bdev1", 00:12:01.983 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:01.983 "strip_size_kb": 0, 00:12:01.983 
"state": "online", 00:12:01.983 "raid_level": "raid1", 00:12:01.983 "superblock": true, 00:12:01.983 "num_base_bdevs": 4, 00:12:01.983 "num_base_bdevs_discovered": 3, 00:12:01.983 "num_base_bdevs_operational": 3, 00:12:01.983 "process": { 00:12:01.983 "type": "rebuild", 00:12:01.983 "target": "spare", 00:12:01.983 "progress": { 00:12:01.983 "blocks": 24576, 00:12:01.983 "percent": 38 00:12:01.983 } 00:12:01.983 }, 00:12:01.983 "base_bdevs_list": [ 00:12:01.983 { 00:12:01.983 "name": "spare", 00:12:01.983 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:01.983 "is_configured": true, 00:12:01.983 "data_offset": 2048, 00:12:01.983 "data_size": 63488 00:12:01.983 }, 00:12:01.983 { 00:12:01.983 "name": null, 00:12:01.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.983 "is_configured": false, 00:12:01.983 "data_offset": 0, 00:12:01.983 "data_size": 63488 00:12:01.983 }, 00:12:01.983 { 00:12:01.983 "name": "BaseBdev3", 00:12:01.983 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:01.983 "is_configured": true, 00:12:01.983 "data_offset": 2048, 00:12:01.983 "data_size": 63488 00:12:01.983 }, 00:12:01.983 { 00:12:01.983 "name": "BaseBdev4", 00:12:01.983 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:01.983 "is_configured": true, 00:12:01.983 "data_offset": 2048, 00:12:01.983 "data_size": 63488 00:12:01.983 } 00:12:01.983 ] 00:12:01.983 }' 00:12:01.983 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=379 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.244 "name": "raid_bdev1", 00:12:02.244 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:02.244 "strip_size_kb": 0, 00:12:02.244 "state": "online", 00:12:02.244 "raid_level": "raid1", 00:12:02.244 "superblock": true, 00:12:02.244 "num_base_bdevs": 4, 00:12:02.244 "num_base_bdevs_discovered": 3, 00:12:02.244 "num_base_bdevs_operational": 3, 00:12:02.244 "process": { 00:12:02.244 "type": "rebuild", 00:12:02.244 "target": "spare", 00:12:02.244 "progress": { 00:12:02.244 "blocks": 26624, 00:12:02.244 "percent": 41 00:12:02.244 } 00:12:02.244 }, 00:12:02.244 "base_bdevs_list": [ 00:12:02.244 { 00:12:02.244 "name": "spare", 00:12:02.244 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:02.244 "is_configured": 
true, 00:12:02.244 "data_offset": 2048, 00:12:02.244 "data_size": 63488 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": null, 00:12:02.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.244 "is_configured": false, 00:12:02.244 "data_offset": 0, 00:12:02.244 "data_size": 63488 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": "BaseBdev3", 00:12:02.244 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:02.244 "is_configured": true, 00:12:02.244 "data_offset": 2048, 00:12:02.244 "data_size": 63488 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "name": "BaseBdev4", 00:12:02.244 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:02.244 "is_configured": true, 00:12:02.244 "data_offset": 2048, 00:12:02.244 "data_size": 63488 00:12:02.244 } 00:12:02.244 ] 00:12:02.244 }' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.244 18:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.625 "name": "raid_bdev1", 00:12:03.625 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:03.625 "strip_size_kb": 0, 00:12:03.625 "state": "online", 00:12:03.625 "raid_level": "raid1", 00:12:03.625 "superblock": true, 00:12:03.625 "num_base_bdevs": 4, 00:12:03.625 "num_base_bdevs_discovered": 3, 00:12:03.625 "num_base_bdevs_operational": 3, 00:12:03.625 "process": { 00:12:03.625 "type": "rebuild", 00:12:03.625 "target": "spare", 00:12:03.625 "progress": { 00:12:03.625 "blocks": 51200, 00:12:03.625 "percent": 80 00:12:03.625 } 00:12:03.625 }, 00:12:03.625 "base_bdevs_list": [ 00:12:03.625 { 00:12:03.625 "name": "spare", 00:12:03.625 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 2048, 00:12:03.625 "data_size": 63488 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 "name": null, 00:12:03.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.625 "is_configured": false, 00:12:03.625 "data_offset": 0, 00:12:03.625 "data_size": 63488 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 "name": "BaseBdev3", 00:12:03.625 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 2048, 00:12:03.625 "data_size": 63488 00:12:03.625 }, 00:12:03.625 { 00:12:03.625 "name": "BaseBdev4", 00:12:03.625 "uuid": 
"ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:03.625 "is_configured": true, 00:12:03.625 "data_offset": 2048, 00:12:03.625 "data_size": 63488 00:12:03.625 } 00:12:03.625 ] 00:12:03.625 }' 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.625 18:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:03.885 [2024-12-15 18:43:04.262371] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:03.885 [2024-12-15 18:43:04.262558] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:03.885 [2024-12-15 18:43:04.262716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.456 "name": "raid_bdev1", 00:12:04.456 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:04.456 "strip_size_kb": 0, 00:12:04.456 "state": "online", 00:12:04.456 "raid_level": "raid1", 00:12:04.456 "superblock": true, 00:12:04.456 "num_base_bdevs": 4, 00:12:04.456 "num_base_bdevs_discovered": 3, 00:12:04.456 "num_base_bdevs_operational": 3, 00:12:04.456 "base_bdevs_list": [ 00:12:04.456 { 00:12:04.456 "name": "spare", 00:12:04.456 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:04.456 "is_configured": true, 00:12:04.456 "data_offset": 2048, 00:12:04.456 "data_size": 63488 00:12:04.456 }, 00:12:04.456 { 00:12:04.456 "name": null, 00:12:04.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.456 "is_configured": false, 00:12:04.456 "data_offset": 0, 00:12:04.456 "data_size": 63488 00:12:04.456 }, 00:12:04.456 { 00:12:04.456 "name": "BaseBdev3", 00:12:04.456 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:04.456 "is_configured": true, 00:12:04.456 "data_offset": 2048, 00:12:04.456 "data_size": 63488 00:12:04.456 }, 00:12:04.456 { 00:12:04.456 "name": "BaseBdev4", 00:12:04.456 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:04.456 "is_configured": true, 00:12:04.456 "data_offset": 2048, 00:12:04.456 "data_size": 63488 00:12:04.456 } 00:12:04.456 ] 00:12:04.456 }' 00:12:04.456 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:04.719 
18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.719 "name": "raid_bdev1", 00:12:04.719 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:04.719 "strip_size_kb": 0, 00:12:04.719 "state": "online", 00:12:04.719 "raid_level": "raid1", 00:12:04.719 "superblock": true, 00:12:04.719 "num_base_bdevs": 4, 00:12:04.719 "num_base_bdevs_discovered": 3, 00:12:04.719 "num_base_bdevs_operational": 3, 00:12:04.719 "base_bdevs_list": [ 00:12:04.719 { 00:12:04.719 "name": "spare", 00:12:04.719 "uuid": 
"bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:04.719 "is_configured": true, 00:12:04.719 "data_offset": 2048, 00:12:04.719 "data_size": 63488 00:12:04.719 }, 00:12:04.719 { 00:12:04.719 "name": null, 00:12:04.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.719 "is_configured": false, 00:12:04.719 "data_offset": 0, 00:12:04.719 "data_size": 63488 00:12:04.719 }, 00:12:04.719 { 00:12:04.719 "name": "BaseBdev3", 00:12:04.719 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:04.719 "is_configured": true, 00:12:04.719 "data_offset": 2048, 00:12:04.719 "data_size": 63488 00:12:04.719 }, 00:12:04.719 { 00:12:04.719 "name": "BaseBdev4", 00:12:04.719 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:04.719 "is_configured": true, 00:12:04.719 "data_offset": 2048, 00:12:04.719 "data_size": 63488 00:12:04.719 } 00:12:04.719 ] 00:12:04.719 }' 00:12:04.719 18:43:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.719 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.720 "name": "raid_bdev1", 00:12:04.720 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:04.720 "strip_size_kb": 0, 00:12:04.720 "state": "online", 00:12:04.720 "raid_level": "raid1", 00:12:04.720 "superblock": true, 00:12:04.720 "num_base_bdevs": 4, 00:12:04.720 "num_base_bdevs_discovered": 3, 00:12:04.720 "num_base_bdevs_operational": 3, 00:12:04.720 "base_bdevs_list": [ 00:12:04.720 { 00:12:04.720 "name": "spare", 00:12:04.720 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 2048, 00:12:04.720 "data_size": 63488 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": null, 00:12:04.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.720 "is_configured": false, 00:12:04.720 "data_offset": 0, 00:12:04.720 "data_size": 63488 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": "BaseBdev3", 00:12:04.720 "uuid": 
"ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 2048, 00:12:04.720 "data_size": 63488 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": "BaseBdev4", 00:12:04.720 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 2048, 00:12:04.720 "data_size": 63488 00:12:04.720 } 00:12:04.720 ] 00:12:04.720 }' 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.720 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.295 [2024-12-15 18:43:05.489112] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:05.295 [2024-12-15 18:43:05.489198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.295 [2024-12-15 18:43:05.489314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.295 [2024-12-15 18:43:05.489401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.295 [2024-12-15 18:43:05.489417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.295 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:05.295 /dev/nbd0 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:05.554 18:43:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.554 1+0 records in 00:12:05.554 1+0 records out 00:12:05.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255823 s, 16.0 MB/s 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.554 18:43:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:05.554 /dev/nbd1 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.825 1+0 records in 00:12:05.825 1+0 records out 00:12:05.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329882 s, 12.4 MB/s 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.825 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:06.100 
18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.100 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.360 [2024-12-15 18:43:06.591905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:06.360 [2024-12-15 18:43:06.591957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.360 [2024-12-15 18:43:06.591978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:06.360 [2024-12-15 18:43:06.591991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.360 [2024-12-15 18:43:06.594535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.360 [2024-12-15 18:43:06.594575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.360 [2024-12-15 18:43:06.594660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:06.360 [2024-12-15 18:43:06.594734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.360 [2024-12-15 18:43:06.594876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.360 [2024-12-15 18:43:06.594967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.360 spare 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.360 [2024-12-15 18:43:06.694855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:06.360 [2024-12-15 18:43:06.694893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:06.360 [2024-12-15 
18:43:06.695177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:06.360 [2024-12-15 18:43:06.695339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:06.360 [2024-12-15 18:43:06.695354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:06.360 [2024-12-15 18:43:06.695507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.360 18:43:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.360 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.360 "name": "raid_bdev1", 00:12:06.360 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:06.360 "strip_size_kb": 0, 00:12:06.360 "state": "online", 00:12:06.360 "raid_level": "raid1", 00:12:06.360 "superblock": true, 00:12:06.360 "num_base_bdevs": 4, 00:12:06.361 "num_base_bdevs_discovered": 3, 00:12:06.361 "num_base_bdevs_operational": 3, 00:12:06.361 "base_bdevs_list": [ 00:12:06.361 { 00:12:06.361 "name": "spare", 00:12:06.361 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:06.361 "is_configured": true, 00:12:06.361 "data_offset": 2048, 00:12:06.361 "data_size": 63488 00:12:06.361 }, 00:12:06.361 { 00:12:06.361 "name": null, 00:12:06.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.361 "is_configured": false, 00:12:06.361 "data_offset": 2048, 00:12:06.361 "data_size": 63488 00:12:06.361 }, 00:12:06.361 { 00:12:06.361 "name": "BaseBdev3", 00:12:06.361 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:06.361 "is_configured": true, 00:12:06.361 "data_offset": 2048, 00:12:06.361 "data_size": 63488 00:12:06.361 }, 00:12:06.361 { 00:12:06.361 "name": "BaseBdev4", 00:12:06.361 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:06.361 "is_configured": true, 00:12:06.361 "data_offset": 2048, 00:12:06.361 "data_size": 63488 00:12:06.361 } 00:12:06.361 ] 00:12:06.361 }' 00:12:06.361 18:43:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.361 18:43:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.930 "name": "raid_bdev1", 00:12:06.930 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:06.930 "strip_size_kb": 0, 00:12:06.930 "state": "online", 00:12:06.930 "raid_level": "raid1", 00:12:06.930 "superblock": true, 00:12:06.930 "num_base_bdevs": 4, 00:12:06.930 "num_base_bdevs_discovered": 3, 00:12:06.930 "num_base_bdevs_operational": 3, 00:12:06.930 "base_bdevs_list": [ 00:12:06.930 { 00:12:06.930 "name": "spare", 00:12:06.930 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:06.930 "is_configured": true, 00:12:06.930 "data_offset": 2048, 00:12:06.930 "data_size": 63488 00:12:06.930 }, 00:12:06.930 { 00:12:06.930 "name": null, 00:12:06.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.930 "is_configured": false, 00:12:06.930 "data_offset": 2048, 00:12:06.930 "data_size": 63488 00:12:06.930 }, 00:12:06.930 { 00:12:06.930 "name": 
"BaseBdev3", 00:12:06.930 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:06.930 "is_configured": true, 00:12:06.930 "data_offset": 2048, 00:12:06.930 "data_size": 63488 00:12:06.930 }, 00:12:06.930 { 00:12:06.930 "name": "BaseBdev4", 00:12:06.930 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:06.930 "is_configured": true, 00:12:06.930 "data_offset": 2048, 00:12:06.930 "data_size": 63488 00:12:06.930 } 00:12:06.930 ] 00:12:06.930 }' 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.930 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.189 [2024-12-15 18:43:07.370752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:07.189 18:43:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.189 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.190 "name": "raid_bdev1", 00:12:07.190 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:07.190 "strip_size_kb": 0, 00:12:07.190 "state": "online", 
00:12:07.190 "raid_level": "raid1", 00:12:07.190 "superblock": true, 00:12:07.190 "num_base_bdevs": 4, 00:12:07.190 "num_base_bdevs_discovered": 2, 00:12:07.190 "num_base_bdevs_operational": 2, 00:12:07.190 "base_bdevs_list": [ 00:12:07.190 { 00:12:07.190 "name": null, 00:12:07.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.190 "is_configured": false, 00:12:07.190 "data_offset": 0, 00:12:07.190 "data_size": 63488 00:12:07.190 }, 00:12:07.190 { 00:12:07.190 "name": null, 00:12:07.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.190 "is_configured": false, 00:12:07.190 "data_offset": 2048, 00:12:07.190 "data_size": 63488 00:12:07.190 }, 00:12:07.190 { 00:12:07.190 "name": "BaseBdev3", 00:12:07.190 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:07.190 "is_configured": true, 00:12:07.190 "data_offset": 2048, 00:12:07.190 "data_size": 63488 00:12:07.190 }, 00:12:07.190 { 00:12:07.190 "name": "BaseBdev4", 00:12:07.190 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:07.190 "is_configured": true, 00:12:07.190 "data_offset": 2048, 00:12:07.190 "data_size": 63488 00:12:07.190 } 00:12:07.190 ] 00:12:07.190 }' 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.190 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.449 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.449 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.449 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.449 [2024-12-15 18:43:07.790065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.449 [2024-12-15 18:43:07.790260] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:12:07.450 [2024-12-15 18:43:07.790285] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:07.450 [2024-12-15 18:43:07.790335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.450 [2024-12-15 18:43:07.794351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:07.450 18:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.450 18:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:07.450 [2024-12-15 18:43:07.796311] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.389 18:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.650 "name": "raid_bdev1", 00:12:08.650 "uuid": 
"a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:08.650 "strip_size_kb": 0, 00:12:08.650 "state": "online", 00:12:08.650 "raid_level": "raid1", 00:12:08.650 "superblock": true, 00:12:08.650 "num_base_bdevs": 4, 00:12:08.650 "num_base_bdevs_discovered": 3, 00:12:08.650 "num_base_bdevs_operational": 3, 00:12:08.650 "process": { 00:12:08.650 "type": "rebuild", 00:12:08.650 "target": "spare", 00:12:08.650 "progress": { 00:12:08.650 "blocks": 20480, 00:12:08.650 "percent": 32 00:12:08.650 } 00:12:08.650 }, 00:12:08.650 "base_bdevs_list": [ 00:12:08.650 { 00:12:08.650 "name": "spare", 00:12:08.650 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:08.650 "is_configured": true, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": null, 00:12:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.650 "is_configured": false, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": "BaseBdev3", 00:12:08.650 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:08.650 "is_configured": true, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": "BaseBdev4", 00:12:08.650 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:08.650 "is_configured": true, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 } 00:12:08.650 ] 00:12:08.650 }' 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.650 18:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.650 [2024-12-15 18:43:08.957140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.650 [2024-12-15 18:43:09.000977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.650 [2024-12-15 18:43:09.001036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.650 [2024-12-15 18:43:09.001051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.650 [2024-12-15 18:43:09.001060] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.650 "name": "raid_bdev1", 00:12:08.650 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:08.650 "strip_size_kb": 0, 00:12:08.650 "state": "online", 00:12:08.650 "raid_level": "raid1", 00:12:08.650 "superblock": true, 00:12:08.650 "num_base_bdevs": 4, 00:12:08.650 "num_base_bdevs_discovered": 2, 00:12:08.650 "num_base_bdevs_operational": 2, 00:12:08.650 "base_bdevs_list": [ 00:12:08.650 { 00:12:08.650 "name": null, 00:12:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.650 "is_configured": false, 00:12:08.650 "data_offset": 0, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": null, 00:12:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.650 "is_configured": false, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": "BaseBdev3", 00:12:08.650 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:08.650 "is_configured": true, 00:12:08.650 "data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 }, 00:12:08.650 { 00:12:08.650 "name": "BaseBdev4", 00:12:08.650 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:08.650 "is_configured": true, 00:12:08.650 
"data_offset": 2048, 00:12:08.650 "data_size": 63488 00:12:08.650 } 00:12:08.650 ] 00:12:08.650 }' 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.650 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.220 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:09.220 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.220 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.220 [2024-12-15 18:43:09.440654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:09.220 [2024-12-15 18:43:09.440727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.220 [2024-12-15 18:43:09.440755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:09.220 [2024-12-15 18:43:09.440767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.220 [2024-12-15 18:43:09.441227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.220 [2024-12-15 18:43:09.441257] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.220 [2024-12-15 18:43:09.441348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:09.220 [2024-12-15 18:43:09.441366] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:09.220 [2024-12-15 18:43:09.441376] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:09.220 [2024-12-15 18:43:09.441407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.220 [2024-12-15 18:43:09.445425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:09.220 spare 00:12:09.220 18:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.220 18:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:09.220 [2024-12-15 18:43:09.447262] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.159 "name": "raid_bdev1", 00:12:10.159 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:10.159 "strip_size_kb": 0, 00:12:10.159 "state": "online", 00:12:10.159 
"raid_level": "raid1", 00:12:10.159 "superblock": true, 00:12:10.159 "num_base_bdevs": 4, 00:12:10.159 "num_base_bdevs_discovered": 3, 00:12:10.159 "num_base_bdevs_operational": 3, 00:12:10.159 "process": { 00:12:10.159 "type": "rebuild", 00:12:10.159 "target": "spare", 00:12:10.159 "progress": { 00:12:10.159 "blocks": 20480, 00:12:10.159 "percent": 32 00:12:10.159 } 00:12:10.159 }, 00:12:10.159 "base_bdevs_list": [ 00:12:10.159 { 00:12:10.159 "name": "spare", 00:12:10.159 "uuid": "bbfa697c-00c5-56f4-b910-6d65d2fde527", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": null, 00:12:10.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.159 "is_configured": false, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": "BaseBdev3", 00:12:10.159 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 }, 00:12:10.159 { 00:12:10.159 "name": "BaseBdev4", 00:12:10.159 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:10.159 "is_configured": true, 00:12:10.159 "data_offset": 2048, 00:12:10.159 "data_size": 63488 00:12:10.159 } 00:12:10.159 ] 00:12:10.159 }' 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.159 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.159 [2024-12-15 18:43:10.591401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.419 [2024-12-15 18:43:10.651773] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:10.419 [2024-12-15 18:43:10.651838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.419 [2024-12-15 18:43:10.651855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:10.419 [2024-12-15 18:43:10.651862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.419 
18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.419 "name": "raid_bdev1", 00:12:10.419 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:10.419 "strip_size_kb": 0, 00:12:10.419 "state": "online", 00:12:10.419 "raid_level": "raid1", 00:12:10.419 "superblock": true, 00:12:10.419 "num_base_bdevs": 4, 00:12:10.419 "num_base_bdevs_discovered": 2, 00:12:10.419 "num_base_bdevs_operational": 2, 00:12:10.419 "base_bdevs_list": [ 00:12:10.419 { 00:12:10.419 "name": null, 00:12:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.419 "is_configured": false, 00:12:10.419 "data_offset": 0, 00:12:10.419 "data_size": 63488 00:12:10.419 }, 00:12:10.419 { 00:12:10.419 "name": null, 00:12:10.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.419 "is_configured": false, 00:12:10.419 "data_offset": 2048, 00:12:10.419 "data_size": 63488 00:12:10.419 }, 00:12:10.419 { 00:12:10.419 "name": "BaseBdev3", 00:12:10.419 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:10.419 "is_configured": true, 00:12:10.419 "data_offset": 2048, 00:12:10.419 "data_size": 63488 00:12:10.419 }, 00:12:10.419 { 00:12:10.419 "name": "BaseBdev4", 00:12:10.419 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:10.419 "is_configured": true, 00:12:10.419 "data_offset": 2048, 00:12:10.419 "data_size": 63488 00:12:10.419 } 00:12:10.419 ] 00:12:10.419 }' 00:12:10.419 18:43:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.419 18:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.988 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.988 "name": "raid_bdev1", 00:12:10.988 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:10.988 "strip_size_kb": 0, 00:12:10.988 "state": "online", 00:12:10.988 "raid_level": "raid1", 00:12:10.988 "superblock": true, 00:12:10.988 "num_base_bdevs": 4, 00:12:10.988 "num_base_bdevs_discovered": 2, 00:12:10.988 "num_base_bdevs_operational": 2, 00:12:10.988 "base_bdevs_list": [ 00:12:10.988 { 00:12:10.988 "name": null, 00:12:10.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.988 "is_configured": false, 00:12:10.988 "data_offset": 0, 00:12:10.988 "data_size": 63488 00:12:10.988 }, 00:12:10.988 
{ 00:12:10.988 "name": null, 00:12:10.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.988 "is_configured": false, 00:12:10.988 "data_offset": 2048, 00:12:10.989 "data_size": 63488 00:12:10.989 }, 00:12:10.989 { 00:12:10.989 "name": "BaseBdev3", 00:12:10.989 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:10.989 "is_configured": true, 00:12:10.989 "data_offset": 2048, 00:12:10.989 "data_size": 63488 00:12:10.989 }, 00:12:10.989 { 00:12:10.989 "name": "BaseBdev4", 00:12:10.989 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:10.989 "is_configured": true, 00:12:10.989 "data_offset": 2048, 00:12:10.989 "data_size": 63488 00:12:10.989 } 00:12:10.989 ] 00:12:10.989 }' 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.989 [2024-12-15 18:43:11.279102] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:10.989 [2024-12-15 18:43:11.279159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.989 [2024-12-15 18:43:11.279184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:10.989 [2024-12-15 18:43:11.279193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.989 [2024-12-15 18:43:11.279605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.989 [2024-12-15 18:43:11.279623] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.989 [2024-12-15 18:43:11.279697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:10.989 [2024-12-15 18:43:11.279710] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:10.989 [2024-12-15 18:43:11.279719] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:10.989 [2024-12-15 18:43:11.279729] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:10.989 BaseBdev1 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.989 18:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.927 18:43:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.927 "name": "raid_bdev1", 00:12:11.927 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:11.927 "strip_size_kb": 0, 00:12:11.927 "state": "online", 00:12:11.927 "raid_level": "raid1", 00:12:11.927 "superblock": true, 00:12:11.927 "num_base_bdevs": 4, 00:12:11.927 "num_base_bdevs_discovered": 2, 00:12:11.927 "num_base_bdevs_operational": 2, 00:12:11.927 "base_bdevs_list": [ 00:12:11.927 { 00:12:11.927 "name": null, 00:12:11.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.927 "is_configured": false, 00:12:11.927 "data_offset": 0, 00:12:11.927 "data_size": 63488 00:12:11.927 }, 00:12:11.927 { 00:12:11.927 "name": null, 00:12:11.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.927 
"is_configured": false, 00:12:11.927 "data_offset": 2048, 00:12:11.927 "data_size": 63488 00:12:11.927 }, 00:12:11.927 { 00:12:11.927 "name": "BaseBdev3", 00:12:11.927 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:11.927 "is_configured": true, 00:12:11.927 "data_offset": 2048, 00:12:11.927 "data_size": 63488 00:12:11.927 }, 00:12:11.927 { 00:12:11.927 "name": "BaseBdev4", 00:12:11.927 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:11.927 "is_configured": true, 00:12:11.927 "data_offset": 2048, 00:12:11.927 "data_size": 63488 00:12:11.927 } 00:12:11.927 ] 00:12:11.927 }' 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.927 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:12.497 "name": "raid_bdev1", 00:12:12.497 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:12.497 "strip_size_kb": 0, 00:12:12.497 "state": "online", 00:12:12.497 "raid_level": "raid1", 00:12:12.497 "superblock": true, 00:12:12.497 "num_base_bdevs": 4, 00:12:12.497 "num_base_bdevs_discovered": 2, 00:12:12.497 "num_base_bdevs_operational": 2, 00:12:12.497 "base_bdevs_list": [ 00:12:12.497 { 00:12:12.497 "name": null, 00:12:12.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.497 "is_configured": false, 00:12:12.497 "data_offset": 0, 00:12:12.497 "data_size": 63488 00:12:12.497 }, 00:12:12.497 { 00:12:12.497 "name": null, 00:12:12.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.497 "is_configured": false, 00:12:12.497 "data_offset": 2048, 00:12:12.497 "data_size": 63488 00:12:12.497 }, 00:12:12.497 { 00:12:12.497 "name": "BaseBdev3", 00:12:12.497 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:12.497 "is_configured": true, 00:12:12.497 "data_offset": 2048, 00:12:12.497 "data_size": 63488 00:12:12.497 }, 00:12:12.497 { 00:12:12.497 "name": "BaseBdev4", 00:12:12.497 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:12.497 "is_configured": true, 00:12:12.497 "data_offset": 2048, 00:12:12.497 "data_size": 63488 00:12:12.497 } 00:12:12.497 ] 00:12:12.497 }' 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 [2024-12-15 18:43:12.892558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.497 [2024-12-15 18:43:12.892721] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:12.497 [2024-12-15 18:43:12.892740] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:12.497 request: 00:12:12.497 { 00:12:12.497 "base_bdev": "BaseBdev1", 00:12:12.497 "raid_bdev": "raid_bdev1", 00:12:12.497 "method": "bdev_raid_add_base_bdev", 00:12:12.497 "req_id": 1 00:12:12.497 } 00:12:12.497 Got JSON-RPC error response 00:12:12.497 response: 00:12:12.497 { 00:12:12.497 "code": -22, 00:12:12.497 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:12.497 } 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.497 18:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.879 18:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:13.880 18:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.880 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.880 "name": "raid_bdev1", 00:12:13.880 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:13.880 "strip_size_kb": 0, 00:12:13.880 "state": "online", 00:12:13.880 "raid_level": "raid1", 00:12:13.880 "superblock": true, 00:12:13.880 "num_base_bdevs": 4, 00:12:13.880 "num_base_bdevs_discovered": 2, 00:12:13.880 "num_base_bdevs_operational": 2, 00:12:13.880 "base_bdevs_list": [ 00:12:13.880 { 00:12:13.880 "name": null, 00:12:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.880 "is_configured": false, 00:12:13.880 "data_offset": 0, 00:12:13.880 "data_size": 63488 00:12:13.880 }, 00:12:13.880 { 00:12:13.880 "name": null, 00:12:13.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.880 "is_configured": false, 00:12:13.880 "data_offset": 2048, 00:12:13.880 "data_size": 63488 00:12:13.880 }, 00:12:13.880 { 00:12:13.880 "name": "BaseBdev3", 00:12:13.880 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:13.880 "is_configured": true, 00:12:13.880 "data_offset": 2048, 00:12:13.880 "data_size": 63488 00:12:13.880 }, 00:12:13.880 { 00:12:13.880 "name": "BaseBdev4", 00:12:13.880 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:13.880 "is_configured": true, 00:12:13.880 "data_offset": 2048, 00:12:13.880 "data_size": 63488 00:12:13.880 } 00:12:13.880 ] 00:12:13.880 }' 00:12:13.880 18:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.880 18:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.139 18:43:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.139 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.139 "name": "raid_bdev1", 00:12:14.139 "uuid": "a774cca3-2ad1-4a5a-94ec-c2b451dafcc4", 00:12:14.139 "strip_size_kb": 0, 00:12:14.139 "state": "online", 00:12:14.139 "raid_level": "raid1", 00:12:14.139 "superblock": true, 00:12:14.139 "num_base_bdevs": 4, 00:12:14.139 "num_base_bdevs_discovered": 2, 00:12:14.139 "num_base_bdevs_operational": 2, 00:12:14.139 "base_bdevs_list": [ 00:12:14.139 { 00:12:14.139 "name": null, 00:12:14.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.139 "is_configured": false, 00:12:14.139 "data_offset": 0, 00:12:14.139 "data_size": 63488 00:12:14.139 }, 00:12:14.139 { 00:12:14.139 "name": null, 00:12:14.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.139 "is_configured": false, 00:12:14.139 "data_offset": 2048, 00:12:14.139 "data_size": 63488 00:12:14.139 }, 00:12:14.139 { 00:12:14.139 "name": "BaseBdev3", 00:12:14.139 "uuid": "ddb6ff67-f5b1-55c5-8881-8a22591db29e", 00:12:14.139 "is_configured": true, 00:12:14.139 "data_offset": 2048, 00:12:14.139 "data_size": 63488 00:12:14.139 }, 
00:12:14.140 { 00:12:14.140 "name": "BaseBdev4", 00:12:14.140 "uuid": "ce4fe721-9dd5-5924-ad1d-a03e800b08e9", 00:12:14.140 "is_configured": true, 00:12:14.140 "data_offset": 2048, 00:12:14.140 "data_size": 63488 00:12:14.140 } 00:12:14.140 ] 00:12:14.140 }' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90548 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90548 ']' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 90548 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90548 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.140 killing process with pid 90548 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90548' 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 90548 00:12:14.140 Received shutdown signal, test time was about 60.000000 seconds 00:12:14.140 00:12:14.140 Latency(us) 00:12:14.140 
[2024-12-15T18:43:14.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.140 [2024-12-15T18:43:14.581Z] =================================================================================================================== 00:12:14.140 [2024-12-15T18:43:14.581Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:14.140 [2024-12-15 18:43:14.536781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.140 [2024-12-15 18:43:14.536938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.140 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 90548 00:12:14.140 [2024-12-15 18:43:14.537010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.140 [2024-12-15 18:43:14.537023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:14.399 [2024-12-15 18:43:14.588650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.399 18:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:14.399 00:12:14.399 real 0m23.578s 00:12:14.399 user 0m28.593s 00:12:14.399 sys 0m3.782s 00:12:14.399 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.399 18:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.399 ************************************ 00:12:14.399 END TEST raid_rebuild_test_sb 00:12:14.399 ************************************ 00:12:14.672 18:43:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:14.672 18:43:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:14.672 18:43:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.672 18:43:14 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:14.672 ************************************ 00:12:14.672 START TEST raid_rebuild_test_io 00:12:14.672 ************************************ 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:14.672 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91285 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91285 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 91285 ']' 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.673 18:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.673 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.673 Zero copy mechanism will not be used. 00:12:14.673 [2024-12-15 18:43:14.972237] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:14.673 [2024-12-15 18:43:14.972357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91285 ] 00:12:14.946 [2024-12-15 18:43:15.143264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.946 [2024-12-15 18:43:15.170120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.946 [2024-12-15 18:43:15.212623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.946 [2024-12-15 18:43:15.212666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.515 BaseBdev1_malloc 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.515 [2024-12-15 18:43:15.824355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:15.515 [2024-12-15 18:43:15.824415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.515 [2024-12-15 18:43:15.824447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:15.515 [2024-12-15 18:43:15.824460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.515 [2024-12-15 18:43:15.826561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.515 [2024-12-15 18:43:15.826597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:15.515 BaseBdev1 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.515 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:15.516 BaseBdev2_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 [2024-12-15 18:43:15.852988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:15.516 [2024-12-15 18:43:15.853036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.516 [2024-12-15 18:43:15.853055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.516 [2024-12-15 18:43:15.853064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.516 [2024-12-15 18:43:15.855039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.516 [2024-12-15 18:43:15.855072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:15.516 BaseBdev2 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 BaseBdev3_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 [2024-12-15 18:43:15.881533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:15.516 [2024-12-15 18:43:15.881582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.516 [2024-12-15 18:43:15.881605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:15.516 [2024-12-15 18:43:15.881614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.516 [2024-12-15 18:43:15.883559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.516 [2024-12-15 18:43:15.883594] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:15.516 BaseBdev3 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 BaseBdev4_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 [2024-12-15 18:43:15.921530] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:15.516 [2024-12-15 18:43:15.921585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.516 [2024-12-15 18:43:15.921611] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:15.516 [2024-12-15 18:43:15.921620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.516 [2024-12-15 18:43:15.923698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.516 [2024-12-15 18:43:15.923732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:15.516 BaseBdev4 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.516 spare_malloc 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.516 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.776 spare_delay 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.776 [2024-12-15 18:43:15.962064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:15.776 [2024-12-15 18:43:15.962111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.776 [2024-12-15 18:43:15.962129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:15.776 [2024-12-15 18:43:15.962138] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.776 [2024-12-15 18:43:15.964172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.776 [2024-12-15 18:43:15.964293] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:15.776 spare 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.776 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.776 [2024-12-15 18:43:15.974101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.776 [2024-12-15 18:43:15.975807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.776 [2024-12-15 18:43:15.975885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:15.776 [2024-12-15 18:43:15.975926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:12:15.776 [2024-12-15 18:43:15.975997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:15.776 [2024-12-15 18:43:15.976014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:15.777 [2024-12-15 18:43:15.976262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:15.777 [2024-12-15 18:43:15.976400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:15.777 [2024-12-15 18:43:15.976412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:15.777 [2024-12-15 18:43:15.976552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.777 18:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.777 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.777 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.777 "name": "raid_bdev1", 00:12:15.777 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:15.777 "strip_size_kb": 0, 00:12:15.777 "state": "online", 00:12:15.777 "raid_level": "raid1", 00:12:15.777 "superblock": false, 00:12:15.777 "num_base_bdevs": 4, 00:12:15.777 "num_base_bdevs_discovered": 4, 00:12:15.777 "num_base_bdevs_operational": 4, 00:12:15.777 "base_bdevs_list": [ 00:12:15.777 { 00:12:15.777 "name": "BaseBdev1", 00:12:15.777 "uuid": "735bfa62-67d7-5c54-8184-da59a3034757", 00:12:15.777 "is_configured": true, 00:12:15.777 "data_offset": 0, 00:12:15.777 "data_size": 65536 00:12:15.777 }, 00:12:15.777 { 00:12:15.777 "name": "BaseBdev2", 00:12:15.777 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:15.777 "is_configured": true, 00:12:15.777 "data_offset": 0, 00:12:15.777 "data_size": 65536 00:12:15.777 }, 00:12:15.777 { 00:12:15.777 "name": "BaseBdev3", 00:12:15.777 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:15.777 "is_configured": true, 00:12:15.777 "data_offset": 0, 00:12:15.777 "data_size": 65536 00:12:15.777 }, 00:12:15.777 { 00:12:15.777 "name": "BaseBdev4", 00:12:15.777 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:15.777 "is_configured": true, 00:12:15.777 "data_offset": 0, 00:12:15.777 "data_size": 65536 00:12:15.777 } 00:12:15.777 ] 00:12:15.777 }' 00:12:15.777 
18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.777 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:16.036 [2024-12-15 18:43:16.377725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:16.036 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:16.296 18:43:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.296 [2024-12-15 18:43:16.485213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.296 "name": "raid_bdev1", 00:12:16.296 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:16.296 "strip_size_kb": 0, 00:12:16.296 "state": "online", 00:12:16.296 "raid_level": "raid1", 00:12:16.296 "superblock": false, 00:12:16.296 "num_base_bdevs": 4, 00:12:16.296 "num_base_bdevs_discovered": 3, 00:12:16.296 "num_base_bdevs_operational": 3, 00:12:16.296 "base_bdevs_list": [ 00:12:16.296 { 00:12:16.296 "name": null, 00:12:16.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.296 "is_configured": false, 00:12:16.296 "data_offset": 0, 00:12:16.296 "data_size": 65536 00:12:16.296 }, 00:12:16.296 { 00:12:16.296 "name": "BaseBdev2", 00:12:16.296 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:16.296 "is_configured": true, 00:12:16.296 "data_offset": 0, 00:12:16.296 "data_size": 65536 00:12:16.296 }, 00:12:16.296 { 00:12:16.296 "name": "BaseBdev3", 00:12:16.296 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:16.296 "is_configured": true, 00:12:16.296 "data_offset": 0, 00:12:16.296 "data_size": 65536 00:12:16.296 }, 00:12:16.296 { 00:12:16.296 "name": "BaseBdev4", 00:12:16.296 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:16.296 "is_configured": true, 00:12:16.296 "data_offset": 0, 00:12:16.296 "data_size": 65536 00:12:16.296 } 00:12:16.296 ] 00:12:16.296 }' 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.296 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.296 [2024-12-15 18:43:16.583107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:16.296 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:16.296 Zero copy mechanism will not be used. 00:12:16.296 Running I/O for 60 seconds... 
00:12:16.557 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:16.557 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.557 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.557 [2024-12-15 18:43:16.966721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.557 18:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.557 18:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:16.817 [2024-12-15 18:43:17.014882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:16.817 [2024-12-15 18:43:17.016853] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.817 [2024-12-15 18:43:17.118661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:16.817 [2024-12-15 18:43:17.119092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:17.077 [2024-12-15 18:43:17.329339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.077 [2024-12-15 18:43:17.330097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:17.337 169.00 IOPS, 507.00 MiB/s [2024-12-15T18:43:17.778Z] [2024-12-15 18:43:17.672051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:17.337 [2024-12-15 18:43:17.673372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:17.597 [2024-12-15 18:43:17.897526] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:17.597 [2024-12-15 18:43:17.898310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:17.597 18:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.597 18:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.597 18:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.597 18:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.597 18:43:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.597 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.597 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.597 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.597 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.597 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.857 "name": "raid_bdev1", 00:12:17.857 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:17.857 "strip_size_kb": 0, 00:12:17.857 "state": "online", 00:12:17.857 "raid_level": "raid1", 00:12:17.857 "superblock": false, 00:12:17.857 "num_base_bdevs": 4, 00:12:17.857 "num_base_bdevs_discovered": 4, 00:12:17.857 "num_base_bdevs_operational": 4, 00:12:17.857 "process": { 00:12:17.857 "type": "rebuild", 00:12:17.857 "target": "spare", 00:12:17.857 "progress": { 00:12:17.857 "blocks": 10240, 
00:12:17.857 "percent": 15 00:12:17.857 } 00:12:17.857 }, 00:12:17.857 "base_bdevs_list": [ 00:12:17.857 { 00:12:17.857 "name": "spare", 00:12:17.857 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:17.857 "is_configured": true, 00:12:17.857 "data_offset": 0, 00:12:17.857 "data_size": 65536 00:12:17.857 }, 00:12:17.857 { 00:12:17.857 "name": "BaseBdev2", 00:12:17.857 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:17.857 "is_configured": true, 00:12:17.857 "data_offset": 0, 00:12:17.857 "data_size": 65536 00:12:17.857 }, 00:12:17.857 { 00:12:17.857 "name": "BaseBdev3", 00:12:17.857 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:17.857 "is_configured": true, 00:12:17.857 "data_offset": 0, 00:12:17.857 "data_size": 65536 00:12:17.857 }, 00:12:17.857 { 00:12:17.857 "name": "BaseBdev4", 00:12:17.857 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:17.857 "is_configured": true, 00:12:17.857 "data_offset": 0, 00:12:17.857 "data_size": 65536 00:12:17.857 } 00:12:17.857 ] 00:12:17.857 }' 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.857 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.857 [2024-12-15 18:43:18.129329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.117 [2024-12-15 18:43:18.337502] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:18.117 [2024-12-15 18:43:18.341077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.117 [2024-12-15 18:43:18.341193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.117 [2024-12-15 18:43:18.341221] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:18.117 [2024-12-15 18:43:18.352299] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.117 "name": "raid_bdev1", 00:12:18.117 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:18.117 "strip_size_kb": 0, 00:12:18.117 "state": "online", 00:12:18.117 "raid_level": "raid1", 00:12:18.117 "superblock": false, 00:12:18.117 "num_base_bdevs": 4, 00:12:18.117 "num_base_bdevs_discovered": 3, 00:12:18.117 "num_base_bdevs_operational": 3, 00:12:18.117 "base_bdevs_list": [ 00:12:18.117 { 00:12:18.117 "name": null, 00:12:18.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.117 "is_configured": false, 00:12:18.117 "data_offset": 0, 00:12:18.117 "data_size": 65536 00:12:18.117 }, 00:12:18.117 { 00:12:18.117 "name": "BaseBdev2", 00:12:18.117 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:18.117 "is_configured": true, 00:12:18.117 "data_offset": 0, 00:12:18.117 "data_size": 65536 00:12:18.117 }, 00:12:18.117 { 00:12:18.117 "name": "BaseBdev3", 00:12:18.117 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:18.117 "is_configured": true, 00:12:18.117 "data_offset": 0, 00:12:18.117 "data_size": 65536 00:12:18.117 }, 00:12:18.117 { 00:12:18.117 "name": "BaseBdev4", 00:12:18.117 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:18.117 "is_configured": true, 00:12:18.117 "data_offset": 0, 00:12:18.117 "data_size": 65536 00:12:18.117 } 00:12:18.117 ] 00:12:18.117 }' 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.117 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 136.00 IOPS, 408.00 MiB/s 
[2024-12-15T18:43:19.078Z] 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.637 "name": "raid_bdev1", 00:12:18.637 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:18.637 "strip_size_kb": 0, 00:12:18.637 "state": "online", 00:12:18.637 "raid_level": "raid1", 00:12:18.637 "superblock": false, 00:12:18.637 "num_base_bdevs": 4, 00:12:18.637 "num_base_bdevs_discovered": 3, 00:12:18.637 "num_base_bdevs_operational": 3, 00:12:18.637 "base_bdevs_list": [ 00:12:18.637 { 00:12:18.637 "name": null, 00:12:18.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.637 "is_configured": false, 00:12:18.637 "data_offset": 0, 00:12:18.637 "data_size": 65536 00:12:18.637 }, 00:12:18.637 { 00:12:18.637 "name": "BaseBdev2", 00:12:18.637 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:18.637 "is_configured": true, 00:12:18.637 
"data_offset": 0, 00:12:18.637 "data_size": 65536 00:12:18.637 }, 00:12:18.637 { 00:12:18.637 "name": "BaseBdev3", 00:12:18.637 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:18.637 "is_configured": true, 00:12:18.637 "data_offset": 0, 00:12:18.637 "data_size": 65536 00:12:18.637 }, 00:12:18.637 { 00:12:18.637 "name": "BaseBdev4", 00:12:18.637 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:18.637 "is_configured": true, 00:12:18.637 "data_offset": 0, 00:12:18.637 "data_size": 65536 00:12:18.637 } 00:12:18.637 ] 00:12:18.637 }' 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 [2024-12-15 18:43:18.963436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.637 18:43:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:18.637 [2024-12-15 18:43:19.039445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:18.637 [2024-12-15 18:43:19.041483] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.897 [2024-12-15 18:43:19.157394] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:18.897 [2024-12-15 18:43:19.158752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:19.157 [2024-12-15 18:43:19.391293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:19.157 [2024-12-15 18:43:19.391590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:19.417 159.67 IOPS, 479.00 MiB/s [2024-12-15T18:43:19.858Z] [2024-12-15 18:43:19.662006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:19.417 [2024-12-15 18:43:19.662468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:19.676 [2024-12-15 18:43:19.879316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:19.676 [2024-12-15 18:43:19.879683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.676 "name": "raid_bdev1", 00:12:19.676 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:19.676 "strip_size_kb": 0, 00:12:19.676 "state": "online", 00:12:19.676 "raid_level": "raid1", 00:12:19.676 "superblock": false, 00:12:19.676 "num_base_bdevs": 4, 00:12:19.676 "num_base_bdevs_discovered": 4, 00:12:19.676 "num_base_bdevs_operational": 4, 00:12:19.676 "process": { 00:12:19.676 "type": "rebuild", 00:12:19.676 "target": "spare", 00:12:19.676 "progress": { 00:12:19.676 "blocks": 10240, 00:12:19.676 "percent": 15 00:12:19.676 } 00:12:19.676 }, 00:12:19.676 "base_bdevs_list": [ 00:12:19.676 { 00:12:19.676 "name": "spare", 00:12:19.676 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:19.676 "is_configured": true, 00:12:19.676 "data_offset": 0, 00:12:19.676 "data_size": 65536 00:12:19.676 }, 00:12:19.676 { 00:12:19.676 "name": "BaseBdev2", 00:12:19.676 "uuid": "aac58558-4bcc-555c-afc9-e12d032c50c7", 00:12:19.676 "is_configured": true, 00:12:19.676 "data_offset": 0, 00:12:19.676 "data_size": 65536 00:12:19.676 }, 00:12:19.676 { 00:12:19.676 "name": "BaseBdev3", 00:12:19.676 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:19.676 "is_configured": true, 00:12:19.676 "data_offset": 0, 00:12:19.676 "data_size": 65536 00:12:19.676 }, 00:12:19.676 { 00:12:19.676 "name": "BaseBdev4", 00:12:19.676 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:19.676 "is_configured": true, 00:12:19.676 "data_offset": 0, 00:12:19.676 "data_size": 65536 00:12:19.676 } 00:12:19.676 ] 00:12:19.676 }' 
00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.676 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.936 [2024-12-15 18:43:20.143468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.936 [2024-12-15 18:43:20.229124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:19.936 [2024-12-15 18:43:20.229609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:19.936 [2024-12-15 18:43:20.236014] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:19.936 [2024-12-15 18:43:20.236046] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:19.936 [2024-12-15 18:43:20.237699] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.936 "name": "raid_bdev1", 00:12:19.936 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:19.936 "strip_size_kb": 0, 00:12:19.936 "state": "online", 00:12:19.936 "raid_level": "raid1", 00:12:19.936 "superblock": false, 00:12:19.936 "num_base_bdevs": 4, 00:12:19.936 "num_base_bdevs_discovered": 3, 00:12:19.936 "num_base_bdevs_operational": 3, 
00:12:19.936 "process": { 00:12:19.936 "type": "rebuild", 00:12:19.936 "target": "spare", 00:12:19.936 "progress": { 00:12:19.936 "blocks": 14336, 00:12:19.936 "percent": 21 00:12:19.936 } 00:12:19.936 }, 00:12:19.936 "base_bdevs_list": [ 00:12:19.936 { 00:12:19.936 "name": "spare", 00:12:19.936 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:19.936 "is_configured": true, 00:12:19.936 "data_offset": 0, 00:12:19.936 "data_size": 65536 00:12:19.936 }, 00:12:19.936 { 00:12:19.936 "name": null, 00:12:19.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.936 "is_configured": false, 00:12:19.936 "data_offset": 0, 00:12:19.936 "data_size": 65536 00:12:19.936 }, 00:12:19.936 { 00:12:19.936 "name": "BaseBdev3", 00:12:19.936 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:19.936 "is_configured": true, 00:12:19.936 "data_offset": 0, 00:12:19.936 "data_size": 65536 00:12:19.936 }, 00:12:19.936 { 00:12:19.936 "name": "BaseBdev4", 00:12:19.936 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:19.936 "is_configured": true, 00:12:19.936 "data_offset": 0, 00:12:19.936 "data_size": 65536 00:12:19.936 } 00:12:19.936 ] 00:12:19.936 }' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.936 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.936 [2024-12-15 18:43:20.352750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:20.195 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.196 
18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.196 "name": "raid_bdev1", 00:12:20.196 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:20.196 "strip_size_kb": 0, 00:12:20.196 "state": "online", 00:12:20.196 "raid_level": "raid1", 00:12:20.196 "superblock": false, 00:12:20.196 "num_base_bdevs": 4, 00:12:20.196 "num_base_bdevs_discovered": 3, 00:12:20.196 "num_base_bdevs_operational": 3, 00:12:20.196 "process": { 00:12:20.196 "type": "rebuild", 00:12:20.196 "target": "spare", 00:12:20.196 "progress": { 00:12:20.196 "blocks": 16384, 00:12:20.196 "percent": 25 00:12:20.196 } 00:12:20.196 }, 00:12:20.196 "base_bdevs_list": [ 00:12:20.196 { 00:12:20.196 "name": "spare", 00:12:20.196 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:20.196 "is_configured": true, 00:12:20.196 "data_offset": 0, 00:12:20.196 "data_size": 
65536 00:12:20.196 }, 00:12:20.196 { 00:12:20.196 "name": null, 00:12:20.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.196 "is_configured": false, 00:12:20.196 "data_offset": 0, 00:12:20.196 "data_size": 65536 00:12:20.196 }, 00:12:20.196 { 00:12:20.196 "name": "BaseBdev3", 00:12:20.196 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:20.196 "is_configured": true, 00:12:20.196 "data_offset": 0, 00:12:20.196 "data_size": 65536 00:12:20.196 }, 00:12:20.196 { 00:12:20.196 "name": "BaseBdev4", 00:12:20.196 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:20.196 "is_configured": true, 00:12:20.196 "data_offset": 0, 00:12:20.196 "data_size": 65536 00:12:20.196 } 00:12:20.196 ] 00:12:20.196 }' 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.196 18:43:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.196 [2024-12-15 18:43:20.576981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:20.196 [2024-12-15 18:43:20.577834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:20.455 139.25 IOPS, 417.75 MiB/s [2024-12-15T18:43:20.896Z] [2024-12-15 18:43:20.782973] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:21.024 [2024-12-15 18:43:21.223382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:21.283 18:43:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.283 "name": "raid_bdev1", 00:12:21.283 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:21.283 "strip_size_kb": 0, 00:12:21.283 "state": "online", 00:12:21.283 "raid_level": "raid1", 00:12:21.283 "superblock": false, 00:12:21.283 "num_base_bdevs": 4, 00:12:21.283 "num_base_bdevs_discovered": 3, 00:12:21.283 "num_base_bdevs_operational": 3, 00:12:21.283 "process": { 00:12:21.283 "type": "rebuild", 00:12:21.283 "target": "spare", 00:12:21.283 "progress": { 00:12:21.283 "blocks": 32768, 00:12:21.283 "percent": 50 00:12:21.283 } 00:12:21.283 }, 00:12:21.283 "base_bdevs_list": [ 00:12:21.283 { 00:12:21.283 "name": "spare", 00:12:21.283 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 
00:12:21.283 "is_configured": true, 00:12:21.283 "data_offset": 0, 00:12:21.283 "data_size": 65536 00:12:21.283 }, 00:12:21.283 { 00:12:21.283 "name": null, 00:12:21.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.283 "is_configured": false, 00:12:21.283 "data_offset": 0, 00:12:21.283 "data_size": 65536 00:12:21.283 }, 00:12:21.283 { 00:12:21.283 "name": "BaseBdev3", 00:12:21.283 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:21.283 "is_configured": true, 00:12:21.283 "data_offset": 0, 00:12:21.283 "data_size": 65536 00:12:21.283 }, 00:12:21.283 { 00:12:21.283 "name": "BaseBdev4", 00:12:21.283 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:21.283 "is_configured": true, 00:12:21.283 "data_offset": 0, 00:12:21.283 "data_size": 65536 00:12:21.283 } 00:12:21.283 ] 00:12:21.283 }' 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.283 119.00 IOPS, 357.00 MiB/s [2024-12-15T18:43:21.724Z] 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.283 18:43:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.543 [2024-12-15 18:43:21.774866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:21.803 [2024-12-15 18:43:21.995177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:21.803 [2024-12-15 18:43:21.995474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:22.373 103.50 IOPS, 310.50 MiB/s [2024-12-15T18:43:22.814Z] 18:43:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.373 "name": "raid_bdev1", 00:12:22.373 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:22.373 "strip_size_kb": 0, 00:12:22.373 "state": "online", 00:12:22.373 "raid_level": "raid1", 00:12:22.373 "superblock": false, 00:12:22.373 "num_base_bdevs": 4, 00:12:22.373 "num_base_bdevs_discovered": 3, 00:12:22.373 "num_base_bdevs_operational": 3, 00:12:22.373 "process": { 00:12:22.373 "type": "rebuild", 00:12:22.373 "target": "spare", 00:12:22.373 "progress": { 00:12:22.373 "blocks": 51200, 00:12:22.373 "percent": 78 00:12:22.373 } 00:12:22.373 }, 00:12:22.373 "base_bdevs_list": [ 00:12:22.373 { 00:12:22.373 "name": "spare", 00:12:22.373 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 
00:12:22.373 "is_configured": true, 00:12:22.373 "data_offset": 0, 00:12:22.373 "data_size": 65536 00:12:22.373 }, 00:12:22.373 { 00:12:22.373 "name": null, 00:12:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.373 "is_configured": false, 00:12:22.373 "data_offset": 0, 00:12:22.373 "data_size": 65536 00:12:22.373 }, 00:12:22.373 { 00:12:22.373 "name": "BaseBdev3", 00:12:22.373 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:22.373 "is_configured": true, 00:12:22.373 "data_offset": 0, 00:12:22.373 "data_size": 65536 00:12:22.373 }, 00:12:22.373 { 00:12:22.373 "name": "BaseBdev4", 00:12:22.373 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:22.373 "is_configured": true, 00:12:22.373 "data_offset": 0, 00:12:22.373 "data_size": 65536 00:12:22.373 } 00:12:22.373 ] 00:12:22.373 }' 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.373 18:43:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.633 [2024-12-15 18:43:22.990271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:23.202 [2024-12-15 18:43:23.424682] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:23.202 [2024-12-15 18:43:23.529450] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:23.202 [2024-12-15 18:43:23.532696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.462 93.86 IOPS, 281.57 MiB/s [2024-12-15T18:43:23.903Z] 18:43:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.462 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.462 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.462 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.462 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.463 "name": "raid_bdev1", 00:12:23.463 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:23.463 "strip_size_kb": 0, 00:12:23.463 "state": "online", 00:12:23.463 "raid_level": "raid1", 00:12:23.463 "superblock": false, 00:12:23.463 "num_base_bdevs": 4, 00:12:23.463 "num_base_bdevs_discovered": 3, 00:12:23.463 "num_base_bdevs_operational": 3, 00:12:23.463 "base_bdevs_list": [ 00:12:23.463 { 00:12:23.463 "name": "spare", 00:12:23.463 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:23.463 "is_configured": true, 00:12:23.463 "data_offset": 0, 00:12:23.463 "data_size": 65536 00:12:23.463 }, 00:12:23.463 { 00:12:23.463 "name": null, 00:12:23.463 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:23.463 "is_configured": false, 00:12:23.463 "data_offset": 0, 00:12:23.463 "data_size": 65536 00:12:23.463 }, 00:12:23.463 { 00:12:23.463 "name": "BaseBdev3", 00:12:23.463 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:23.463 "is_configured": true, 00:12:23.463 "data_offset": 0, 00:12:23.463 "data_size": 65536 00:12:23.463 }, 00:12:23.463 { 00:12:23.463 "name": "BaseBdev4", 00:12:23.463 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:23.463 "is_configured": true, 00:12:23.463 "data_offset": 0, 00:12:23.463 "data_size": 65536 00:12:23.463 } 00:12:23.463 ] 00:12:23.463 }' 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:23.463 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.722 18:43:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.722 18:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.723 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.723 "name": "raid_bdev1", 00:12:23.723 "uuid": "7c4a9024-6657-412f-9082-14bd0af47552", 00:12:23.723 "strip_size_kb": 0, 00:12:23.723 "state": "online", 00:12:23.723 "raid_level": "raid1", 00:12:23.723 "superblock": false, 00:12:23.723 "num_base_bdevs": 4, 00:12:23.723 "num_base_bdevs_discovered": 3, 00:12:23.723 "num_base_bdevs_operational": 3, 00:12:23.723 "base_bdevs_list": [ 00:12:23.723 { 00:12:23.723 "name": "spare", 00:12:23.723 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": null, 00:12:23.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.723 "is_configured": false, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": "BaseBdev3", 00:12:23.723 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": "BaseBdev4", 00:12:23.723 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 } 00:12:23.723 ] 00:12:23.723 }' 00:12:23.723 18:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.723 "name": "raid_bdev1", 00:12:23.723 "uuid": 
"7c4a9024-6657-412f-9082-14bd0af47552", 00:12:23.723 "strip_size_kb": 0, 00:12:23.723 "state": "online", 00:12:23.723 "raid_level": "raid1", 00:12:23.723 "superblock": false, 00:12:23.723 "num_base_bdevs": 4, 00:12:23.723 "num_base_bdevs_discovered": 3, 00:12:23.723 "num_base_bdevs_operational": 3, 00:12:23.723 "base_bdevs_list": [ 00:12:23.723 { 00:12:23.723 "name": "spare", 00:12:23.723 "uuid": "15b8f95e-3256-554a-9374-912cc18feb4a", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": null, 00:12:23.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.723 "is_configured": false, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": "BaseBdev3", 00:12:23.723 "uuid": "855e4010-5b00-5875-80b4-4a3d97fda6a8", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 }, 00:12:23.723 { 00:12:23.723 "name": "BaseBdev4", 00:12:23.723 "uuid": "07fd6cc5-9021-5a41-bbc4-5c931cb1e4b8", 00:12:23.723 "is_configured": true, 00:12:23.723 "data_offset": 0, 00:12:23.723 "data_size": 65536 00:12:23.723 } 00:12:23.723 ] 00:12:23.723 }' 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.723 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.293 [2024-12-15 18:43:24.505247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.293 [2024-12-15 18:43:24.505331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:12:24.293 86.25 IOPS, 258.75 MiB/s 00:12:24.293 Latency(us) 00:12:24.293 [2024-12-15T18:43:24.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.293 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:24.293 raid_bdev1 : 8.04 86.12 258.36 0.00 0.00 15148.07 289.76 117220.72 00:12:24.293 [2024-12-15T18:43:24.734Z] =================================================================================================================== 00:12:24.293 [2024-12-15T18:43:24.734Z] Total : 86.12 258.36 0.00 0.00 15148.07 289.76 117220.72 00:12:24.293 [2024-12-15 18:43:24.608260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.293 [2024-12-15 18:43:24.608344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.293 [2024-12-15 18:43:24.608455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.293 [2024-12-15 18:43:24.608499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:24.293 { 00:12:24.293 "results": [ 00:12:24.293 { 00:12:24.293 "job": "raid_bdev1", 00:12:24.293 "core_mask": "0x1", 00:12:24.293 "workload": "randrw", 00:12:24.293 "percentage": 50, 00:12:24.293 "status": "finished", 00:12:24.293 "queue_depth": 2, 00:12:24.293 "io_size": 3145728, 00:12:24.293 "runtime": 8.035187, 00:12:24.293 "iops": 86.12120663775467, 00:12:24.293 "mibps": 258.363619913264, 00:12:24.293 "io_failed": 0, 00:12:24.293 "io_timeout": 0, 00:12:24.293 "avg_latency_us": 15148.069969962391, 00:12:24.293 "min_latency_us": 289.7606986899563, 00:12:24.293 "max_latency_us": 117220.7231441048 00:12:24.293 } 00:12:24.293 ], 00:12:24.293 "core_count": 1 00:12:24.293 } 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.293 18:43:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.293 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:24.554 /dev/nbd0 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.554 1+0 records in 00:12:24.554 1+0 records out 00:12:24.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414909 s, 9.9 MB/s 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.554 18:43:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:24.814 /dev/nbd1 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:24.814 
18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.814 1+0 records in 00:12:24.814 1+0 records out 00:12:24.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048939 s, 8.4 MB/s 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.814 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.078 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:25.343 /dev/nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.343 1+0 records in 00:12:25.343 1+0 records out 00:12:25.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048972 s, 8.4 MB/s 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.343 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.343 18:43:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.603 18:43:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.863 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.863 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.863 18:43:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 91285 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 91285 ']' 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 91285 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91285 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91285' 00:12:25.864 killing process with pid 91285 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 91285 00:12:25.864 Received shutdown signal, test time was about 9.681907 seconds 00:12:25.864 00:12:25.864 Latency(us) 00:12:25.864 [2024-12-15T18:43:26.305Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.864 [2024-12-15T18:43:26.305Z] =================================================================================================================== 00:12:25.864 [2024-12-15T18:43:26.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.864 [2024-12-15 18:43:26.248600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.864 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 91285 00:12:25.864 [2024-12-15 18:43:26.295661] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.124 18:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:26.124 ************************************ 00:12:26.124 END TEST raid_rebuild_test_io 00:12:26.124 ************************************ 00:12:26.124 00:12:26.124 real 0m11.632s 00:12:26.124 user 0m14.994s 00:12:26.124 sys 0m1.807s 00:12:26.124 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.124 18:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.384 18:43:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:26.384 18:43:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:26.384 18:43:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.384 18:43:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.384 ************************************ 00:12:26.384 START TEST raid_rebuild_test_sb_io 00:12:26.384 ************************************ 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=4 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91683 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91683 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 91683 ']' 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:26.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.384 18:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.384 [2024-12-15 18:43:26.691094] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:26.384 [2024-12-15 18:43:26.691372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91683 ] 00:12:26.384 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:26.384 Zero copy mechanism will not be used. 00:12:26.644 [2024-12-15 18:43:26.865421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.644 [2024-12-15 18:43:26.891216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.644 [2024-12-15 18:43:26.933942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.644 [2024-12-15 18:43:26.934062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 
BaseBdev1_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 [2024-12-15 18:43:27.530513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:27.214 [2024-12-15 18:43:27.530573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.214 [2024-12-15 18:43:27.530618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:27.214 [2024-12-15 18:43:27.530629] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.214 [2024-12-15 18:43:27.532739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.214 [2024-12-15 18:43:27.532835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.214 BaseBdev1 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 BaseBdev2_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 [2024-12-15 18:43:27.559208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:27.214 [2024-12-15 18:43:27.559257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.214 [2024-12-15 18:43:27.559295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:27.214 [2024-12-15 18:43:27.559303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.214 [2024-12-15 18:43:27.561368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.214 [2024-12-15 18:43:27.561406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.214 BaseBdev2 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 BaseBdev3_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:27.214 18:43:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 [2024-12-15 18:43:27.587739] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:27.214 [2024-12-15 18:43:27.587845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.214 [2024-12-15 18:43:27.587890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:27.214 [2024-12-15 18:43:27.587900] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.214 [2024-12-15 18:43:27.589972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.214 [2024-12-15 18:43:27.590006] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.214 BaseBdev3 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 BaseBdev4_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:27.214 [2024-12-15 18:43:27.628073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:27.214 [2024-12-15 18:43:27.628129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.214 [2024-12-15 18:43:27.628155] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:27.214 [2024-12-15 18:43:27.628163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.214 [2024-12-15 18:43:27.630241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.214 [2024-12-15 18:43:27.630327] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.214 BaseBdev4 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.214 spare_malloc 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.214 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.473 spare_delay 00:12:27.473 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.473 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:12:27.473 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.473 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.473 [2024-12-15 18:43:27.668695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:27.473 [2024-12-15 18:43:27.668794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.474 [2024-12-15 18:43:27.668852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:27.474 [2024-12-15 18:43:27.668863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.474 [2024-12-15 18:43:27.671026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.474 [2024-12-15 18:43:27.671060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:27.474 spare 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.474 [2024-12-15 18:43:27.680754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.474 [2024-12-15 18:43:27.682552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.474 [2024-12-15 18:43:27.682611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.474 [2024-12-15 18:43:27.682649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 
is claimed 00:12:27.474 [2024-12-15 18:43:27.682796] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:27.474 [2024-12-15 18:43:27.682823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.474 [2024-12-15 18:43:27.683042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:27.474 [2024-12-15 18:43:27.683169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:27.474 [2024-12-15 18:43:27.683182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:27.474 [2024-12-15 18:43:27.683299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.474 "name": "raid_bdev1", 00:12:27.474 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:27.474 "strip_size_kb": 0, 00:12:27.474 "state": "online", 00:12:27.474 "raid_level": "raid1", 00:12:27.474 "superblock": true, 00:12:27.474 "num_base_bdevs": 4, 00:12:27.474 "num_base_bdevs_discovered": 4, 00:12:27.474 "num_base_bdevs_operational": 4, 00:12:27.474 "base_bdevs_list": [ 00:12:27.474 { 00:12:27.474 "name": "BaseBdev1", 00:12:27.474 "uuid": "09c71d58-6176-581b-8704-fbe59fc1de71", 00:12:27.474 "is_configured": true, 00:12:27.474 "data_offset": 2048, 00:12:27.474 "data_size": 63488 00:12:27.474 }, 00:12:27.474 { 00:12:27.474 "name": "BaseBdev2", 00:12:27.474 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:27.474 "is_configured": true, 00:12:27.474 "data_offset": 2048, 00:12:27.474 "data_size": 63488 00:12:27.474 }, 00:12:27.474 { 00:12:27.474 "name": "BaseBdev3", 00:12:27.474 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:27.474 "is_configured": true, 00:12:27.474 "data_offset": 2048, 00:12:27.474 "data_size": 63488 00:12:27.474 }, 00:12:27.474 { 00:12:27.474 "name": "BaseBdev4", 00:12:27.474 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:27.474 "is_configured": true, 00:12:27.474 "data_offset": 2048, 00:12:27.474 "data_size": 63488 
00:12:27.474 } 00:12:27.474 ] 00:12:27.474 }' 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.474 18:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:27.733 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.733 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.733 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.733 [2024-12-15 18:43:28.152316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.994 [2024-12-15 18:43:28.251764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.994 "name": "raid_bdev1", 00:12:27.994 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:27.994 "strip_size_kb": 0, 00:12:27.994 "state": "online", 00:12:27.994 "raid_level": "raid1", 00:12:27.994 "superblock": true, 00:12:27.994 "num_base_bdevs": 4, 00:12:27.994 "num_base_bdevs_discovered": 3, 00:12:27.994 "num_base_bdevs_operational": 3, 00:12:27.994 "base_bdevs_list": [ 00:12:27.994 { 00:12:27.994 "name": null, 00:12:27.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.994 "is_configured": false, 00:12:27.994 "data_offset": 0, 00:12:27.994 "data_size": 63488 00:12:27.994 }, 00:12:27.994 { 00:12:27.994 "name": "BaseBdev2", 00:12:27.994 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:27.994 "is_configured": true, 00:12:27.994 "data_offset": 2048, 00:12:27.994 "data_size": 63488 00:12:27.994 }, 00:12:27.994 { 00:12:27.994 "name": "BaseBdev3", 00:12:27.994 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:27.994 "is_configured": true, 00:12:27.994 "data_offset": 2048, 00:12:27.994 "data_size": 63488 00:12:27.994 }, 00:12:27.994 { 00:12:27.994 "name": "BaseBdev4", 00:12:27.994 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:27.994 "is_configured": true, 00:12:27.994 "data_offset": 2048, 00:12:27.994 "data_size": 63488 00:12:27.994 } 00:12:27.994 ] 00:12:27.994 }' 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.994 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.994 [2024-12-15 18:43:28.341606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:27.994 I/O size 
of 3145728 is greater than zero copy threshold (65536). 00:12:27.994 Zero copy mechanism will not be used. 00:12:27.994 Running I/O for 60 seconds... 00:12:28.564 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:28.565 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.565 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.565 [2024-12-15 18:43:28.705377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.565 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.565 18:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:28.565 [2024-12-15 18:43:28.759031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:28.565 [2024-12-15 18:43:28.761095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.565 [2024-12-15 18:43:28.876142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:28.565 [2024-12-15 18:43:28.877482] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:28.825 [2024-12-15 18:43:29.098994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:28.825 [2024-12-15 18:43:29.099731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:29.084 203.00 IOPS, 609.00 MiB/s [2024-12-15T18:43:29.526Z] [2024-12-15 18:43:29.430080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.344 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.605 [2024-12-15 18:43:29.796640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.605 "name": "raid_bdev1", 00:12:29.605 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:29.605 "strip_size_kb": 0, 00:12:29.605 "state": "online", 00:12:29.605 "raid_level": "raid1", 00:12:29.605 "superblock": true, 00:12:29.605 "num_base_bdevs": 4, 00:12:29.605 "num_base_bdevs_discovered": 4, 00:12:29.605 "num_base_bdevs_operational": 4, 00:12:29.605 "process": { 00:12:29.605 "type": "rebuild", 00:12:29.605 "target": "spare", 00:12:29.605 "progress": { 00:12:29.605 "blocks": 12288, 00:12:29.605 "percent": 19 00:12:29.605 } 00:12:29.605 }, 00:12:29.605 "base_bdevs_list": [ 00:12:29.605 { 00:12:29.605 "name": "spare", 
00:12:29.605 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:29.605 "is_configured": true, 00:12:29.605 "data_offset": 2048, 00:12:29.605 "data_size": 63488 00:12:29.605 }, 00:12:29.605 { 00:12:29.605 "name": "BaseBdev2", 00:12:29.605 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:29.605 "is_configured": true, 00:12:29.605 "data_offset": 2048, 00:12:29.605 "data_size": 63488 00:12:29.605 }, 00:12:29.605 { 00:12:29.605 "name": "BaseBdev3", 00:12:29.605 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:29.605 "is_configured": true, 00:12:29.605 "data_offset": 2048, 00:12:29.605 "data_size": 63488 00:12:29.605 }, 00:12:29.605 { 00:12:29.605 "name": "BaseBdev4", 00:12:29.605 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:29.605 "is_configured": true, 00:12:29.605 "data_offset": 2048, 00:12:29.605 "data_size": 63488 00:12:29.605 } 00:12:29.605 ] 00:12:29.605 }' 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.605 18:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.605 [2024-12-15 18:43:29.870130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.605 [2024-12-15 18:43:30.013997] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.605 [2024-12-15 18:43:30.024080] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.605 [2024-12-15 18:43:30.024206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.605 [2024-12-15 18:43:30.024235] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.605 [2024-12-15 18:43:30.042419] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.866 18:43:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.866 "name": "raid_bdev1", 00:12:29.866 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:29.866 "strip_size_kb": 0, 00:12:29.866 "state": "online", 00:12:29.866 "raid_level": "raid1", 00:12:29.866 "superblock": true, 00:12:29.866 "num_base_bdevs": 4, 00:12:29.866 "num_base_bdevs_discovered": 3, 00:12:29.866 "num_base_bdevs_operational": 3, 00:12:29.866 "base_bdevs_list": [ 00:12:29.866 { 00:12:29.866 "name": null, 00:12:29.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.866 "is_configured": false, 00:12:29.866 "data_offset": 0, 00:12:29.866 "data_size": 63488 00:12:29.866 }, 00:12:29.866 { 00:12:29.866 "name": "BaseBdev2", 00:12:29.866 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:29.866 "is_configured": true, 00:12:29.866 "data_offset": 2048, 00:12:29.866 "data_size": 63488 00:12:29.866 }, 00:12:29.866 { 00:12:29.866 "name": "BaseBdev3", 00:12:29.866 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:29.866 "is_configured": true, 00:12:29.866 "data_offset": 2048, 00:12:29.866 "data_size": 63488 00:12:29.866 }, 00:12:29.866 { 00:12:29.866 "name": "BaseBdev4", 00:12:29.866 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:29.866 "is_configured": true, 00:12:29.866 "data_offset": 2048, 00:12:29.866 "data_size": 63488 00:12:29.866 } 00:12:29.866 ] 00:12:29.866 }' 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.866 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.125 175.50 IOPS, 526.50 MiB/s [2024-12-15T18:43:30.566Z] 18:43:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.125 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.125 "name": "raid_bdev1", 00:12:30.125 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:30.125 "strip_size_kb": 0, 00:12:30.125 "state": "online", 00:12:30.125 "raid_level": "raid1", 00:12:30.125 "superblock": true, 00:12:30.125 "num_base_bdevs": 4, 00:12:30.125 "num_base_bdevs_discovered": 3, 00:12:30.125 "num_base_bdevs_operational": 3, 00:12:30.125 "base_bdevs_list": [ 00:12:30.125 { 00:12:30.125 "name": null, 00:12:30.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.125 "is_configured": false, 00:12:30.125 "data_offset": 0, 00:12:30.125 "data_size": 63488 00:12:30.125 }, 00:12:30.125 { 00:12:30.125 "name": "BaseBdev2", 00:12:30.125 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:30.125 "is_configured": true, 00:12:30.125 "data_offset": 
2048, 00:12:30.125 "data_size": 63488 00:12:30.125 }, 00:12:30.125 { 00:12:30.126 "name": "BaseBdev3", 00:12:30.126 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:30.126 "is_configured": true, 00:12:30.126 "data_offset": 2048, 00:12:30.126 "data_size": 63488 00:12:30.126 }, 00:12:30.126 { 00:12:30.126 "name": "BaseBdev4", 00:12:30.126 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:30.126 "is_configured": true, 00:12:30.126 "data_offset": 2048, 00:12:30.126 "data_size": 63488 00:12:30.126 } 00:12:30.126 ] 00:12:30.126 }' 00:12:30.126 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.386 [2024-12-15 18:43:30.662039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.386 18:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:30.386 [2024-12-15 18:43:30.711213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:30.386 [2024-12-15 18:43:30.713158] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.386 [2024-12-15 18:43:30.821891] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:30.386 [2024-12-15 18:43:30.823142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:30.646 [2024-12-15 18:43:31.050671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:30.906 [2024-12-15 18:43:31.310152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:31.166 177.67 IOPS, 533.00 MiB/s [2024-12-15T18:43:31.607Z] [2024-12-15 18:43:31.432324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:31.166 [2024-12-15 18:43:31.432970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.426 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.426 "name": "raid_bdev1", 00:12:31.426 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:31.426 "strip_size_kb": 0, 00:12:31.426 "state": "online", 00:12:31.426 "raid_level": "raid1", 00:12:31.426 "superblock": true, 00:12:31.427 "num_base_bdevs": 4, 00:12:31.427 "num_base_bdevs_discovered": 4, 00:12:31.427 "num_base_bdevs_operational": 4, 00:12:31.427 "process": { 00:12:31.427 "type": "rebuild", 00:12:31.427 "target": "spare", 00:12:31.427 "progress": { 00:12:31.427 "blocks": 12288, 00:12:31.427 "percent": 19 00:12:31.427 } 00:12:31.427 }, 00:12:31.427 "base_bdevs_list": [ 00:12:31.427 { 00:12:31.427 "name": "spare", 00:12:31.427 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:31.427 "is_configured": true, 00:12:31.427 "data_offset": 2048, 00:12:31.427 "data_size": 63488 00:12:31.427 }, 00:12:31.427 { 00:12:31.427 "name": "BaseBdev2", 00:12:31.427 "uuid": "57572aeb-97a9-5fff-a34a-a2842ef4c107", 00:12:31.427 "is_configured": true, 00:12:31.427 "data_offset": 2048, 00:12:31.427 "data_size": 63488 00:12:31.427 }, 00:12:31.427 { 00:12:31.427 "name": "BaseBdev3", 00:12:31.427 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:31.427 "is_configured": true, 00:12:31.427 "data_offset": 2048, 00:12:31.427 "data_size": 63488 00:12:31.427 }, 00:12:31.427 { 00:12:31.427 "name": "BaseBdev4", 00:12:31.427 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:31.427 "is_configured": true, 00:12:31.427 "data_offset": 2048, 00:12:31.427 "data_size": 63488 00:12:31.427 } 00:12:31.427 ] 00:12:31.427 }' 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.427 [2024-12-15 18:43:31.764644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:31.427 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.427 18:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.427 [2024-12-15 18:43:31.835763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.689 [2024-12-15 18:43:31.982105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:31.689 [2024-12-15 18:43:32.097211] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:31.689 [2024-12-15 18:43:32.097262] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:31.689 [2024-12-15 18:43:32.097315] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.689 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.965 "name": "raid_bdev1", 00:12:31.965 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:31.965 "strip_size_kb": 0, 00:12:31.965 "state": "online", 00:12:31.965 "raid_level": "raid1", 00:12:31.965 "superblock": true, 00:12:31.965 "num_base_bdevs": 4, 00:12:31.965 "num_base_bdevs_discovered": 3, 
00:12:31.965 "num_base_bdevs_operational": 3, 00:12:31.965 "process": { 00:12:31.965 "type": "rebuild", 00:12:31.965 "target": "spare", 00:12:31.965 "progress": { 00:12:31.965 "blocks": 16384, 00:12:31.965 "percent": 25 00:12:31.965 } 00:12:31.965 }, 00:12:31.965 "base_bdevs_list": [ 00:12:31.965 { 00:12:31.965 "name": "spare", 00:12:31.965 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:31.965 "is_configured": true, 00:12:31.965 "data_offset": 2048, 00:12:31.965 "data_size": 63488 00:12:31.965 }, 00:12:31.965 { 00:12:31.965 "name": null, 00:12:31.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.965 "is_configured": false, 00:12:31.965 "data_offset": 0, 00:12:31.965 "data_size": 63488 00:12:31.965 }, 00:12:31.965 { 00:12:31.965 "name": "BaseBdev3", 00:12:31.965 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:31.965 "is_configured": true, 00:12:31.965 "data_offset": 2048, 00:12:31.965 "data_size": 63488 00:12:31.965 }, 00:12:31.965 { 00:12:31.965 "name": "BaseBdev4", 00:12:31.965 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:31.965 "is_configured": true, 00:12:31.965 "data_offset": 2048, 00:12:31.965 "data_size": 63488 00:12:31.965 } 00:12:31.965 ] 00:12:31.965 }' 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.965 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.965 "name": "raid_bdev1", 00:12:31.965 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:31.965 "strip_size_kb": 0, 00:12:31.965 "state": "online", 00:12:31.965 "raid_level": "raid1", 00:12:31.965 "superblock": true, 00:12:31.965 "num_base_bdevs": 4, 00:12:31.965 "num_base_bdevs_discovered": 3, 00:12:31.965 "num_base_bdevs_operational": 3, 00:12:31.965 "process": { 00:12:31.965 "type": "rebuild", 00:12:31.965 "target": "spare", 00:12:31.965 "progress": { 00:12:31.965 "blocks": 18432, 00:12:31.965 "percent": 29 00:12:31.965 } 00:12:31.965 }, 00:12:31.965 "base_bdevs_list": [ 00:12:31.965 { 00:12:31.965 "name": "spare", 00:12:31.966 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:31.966 "is_configured": true, 00:12:31.966 "data_offset": 2048, 00:12:31.966 "data_size": 63488 00:12:31.966 }, 00:12:31.966 { 
00:12:31.966 "name": null, 00:12:31.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.966 "is_configured": false, 00:12:31.966 "data_offset": 0, 00:12:31.966 "data_size": 63488 00:12:31.966 }, 00:12:31.966 { 00:12:31.966 "name": "BaseBdev3", 00:12:31.966 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:31.966 "is_configured": true, 00:12:31.966 "data_offset": 2048, 00:12:31.966 "data_size": 63488 00:12:31.966 }, 00:12:31.966 { 00:12:31.966 "name": "BaseBdev4", 00:12:31.966 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:31.966 "is_configured": true, 00:12:31.966 "data_offset": 2048, 00:12:31.966 "data_size": 63488 00:12:31.966 } 00:12:31.966 ] 00:12:31.966 }' 00:12:31.966 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.966 [2024-12-15 18:43:32.340629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:31.966 [2024-12-15 18:43:32.341059] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:31.966 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.966 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.240 146.00 IOPS, 438.00 MiB/s [2024-12-15T18:43:32.681Z] 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.240 18:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:32.240 [2024-12-15 18:43:32.550189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:32.240 [2024-12-15 18:43:32.550631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:32.809 [2024-12-15 
18:43:33.171725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:33.070 [2024-12-15 18:43:33.280284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:33.070 133.20 IOPS, 399.60 MiB/s [2024-12-15T18:43:33.511Z] 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.070 "name": "raid_bdev1", 00:12:33.070 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:33.070 "strip_size_kb": 0, 00:12:33.070 "state": "online", 00:12:33.070 "raid_level": "raid1", 00:12:33.070 "superblock": true, 00:12:33.070 "num_base_bdevs": 4, 00:12:33.070 
"num_base_bdevs_discovered": 3, 00:12:33.070 "num_base_bdevs_operational": 3, 00:12:33.070 "process": { 00:12:33.070 "type": "rebuild", 00:12:33.070 "target": "spare", 00:12:33.070 "progress": { 00:12:33.070 "blocks": 36864, 00:12:33.070 "percent": 58 00:12:33.070 } 00:12:33.070 }, 00:12:33.070 "base_bdevs_list": [ 00:12:33.070 { 00:12:33.070 "name": "spare", 00:12:33.070 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:33.070 "is_configured": true, 00:12:33.070 "data_offset": 2048, 00:12:33.070 "data_size": 63488 00:12:33.070 }, 00:12:33.070 { 00:12:33.070 "name": null, 00:12:33.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.070 "is_configured": false, 00:12:33.070 "data_offset": 0, 00:12:33.070 "data_size": 63488 00:12:33.070 }, 00:12:33.070 { 00:12:33.070 "name": "BaseBdev3", 00:12:33.070 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:33.070 "is_configured": true, 00:12:33.070 "data_offset": 2048, 00:12:33.070 "data_size": 63488 00:12:33.070 }, 00:12:33.070 { 00:12:33.070 "name": "BaseBdev4", 00:12:33.070 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:33.070 "is_configured": true, 00:12:33.070 "data_offset": 2048, 00:12:33.070 "data_size": 63488 00:12:33.070 } 00:12:33.070 ] 00:12:33.070 }' 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.070 [2024-12-15 18:43:33.495410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:33.070 [2024-12-15 18:43:33.496326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.070 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.330 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.330 18:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:33.330 [2024-12-15 18:43:33.699396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:33.590 [2024-12-15 18:43:34.009102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:34.159 118.50 IOPS, 355.50 MiB/s [2024-12-15T18:43:34.600Z] 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.159 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.159 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.159 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.160 "name": "raid_bdev1", 00:12:34.160 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:34.160 
"strip_size_kb": 0, 00:12:34.160 "state": "online", 00:12:34.160 "raid_level": "raid1", 00:12:34.160 "superblock": true, 00:12:34.160 "num_base_bdevs": 4, 00:12:34.160 "num_base_bdevs_discovered": 3, 00:12:34.160 "num_base_bdevs_operational": 3, 00:12:34.160 "process": { 00:12:34.160 "type": "rebuild", 00:12:34.160 "target": "spare", 00:12:34.160 "progress": { 00:12:34.160 "blocks": 55296, 00:12:34.160 "percent": 87 00:12:34.160 } 00:12:34.160 }, 00:12:34.160 "base_bdevs_list": [ 00:12:34.160 { 00:12:34.160 "name": "spare", 00:12:34.160 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:34.160 "is_configured": true, 00:12:34.160 "data_offset": 2048, 00:12:34.160 "data_size": 63488 00:12:34.160 }, 00:12:34.160 { 00:12:34.160 "name": null, 00:12:34.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.160 "is_configured": false, 00:12:34.160 "data_offset": 0, 00:12:34.160 "data_size": 63488 00:12:34.160 }, 00:12:34.160 { 00:12:34.160 "name": "BaseBdev3", 00:12:34.160 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:34.160 "is_configured": true, 00:12:34.160 "data_offset": 2048, 00:12:34.160 "data_size": 63488 00:12:34.160 }, 00:12:34.160 { 00:12:34.160 "name": "BaseBdev4", 00:12:34.160 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:34.160 "is_configured": true, 00:12:34.160 "data_offset": 2048, 00:12:34.160 "data_size": 63488 00:12:34.160 } 00:12:34.160 ] 00:12:34.160 }' 00:12:34.160 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.431 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.431 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.431 [2024-12-15 18:43:34.652143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:34.431 [2024-12-15 18:43:34.652679] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:34.431 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.431 18:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.704 [2024-12-15 18:43:34.982971] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:34.704 [2024-12-15 18:43:35.082810] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:34.704 [2024-12-15 18:43:35.091562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.531 106.86 IOPS, 320.57 MiB/s [2024-12-15T18:43:35.972Z] 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:35.531 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.531 "name": "raid_bdev1", 00:12:35.532 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:35.532 "strip_size_kb": 0, 00:12:35.532 "state": "online", 00:12:35.532 "raid_level": "raid1", 00:12:35.532 "superblock": true, 00:12:35.532 "num_base_bdevs": 4, 00:12:35.532 "num_base_bdevs_discovered": 3, 00:12:35.532 "num_base_bdevs_operational": 3, 00:12:35.532 "base_bdevs_list": [ 00:12:35.532 { 00:12:35.532 "name": "spare", 00:12:35.532 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": null, 00:12:35.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.532 "is_configured": false, 00:12:35.532 "data_offset": 0, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": "BaseBdev3", 00:12:35.532 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": "BaseBdev4", 00:12:35.532 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 } 00:12:35.532 ] 00:12:35.532 }' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:35.532 
18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.532 "name": "raid_bdev1", 00:12:35.532 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:35.532 "strip_size_kb": 0, 00:12:35.532 "state": "online", 00:12:35.532 "raid_level": "raid1", 00:12:35.532 "superblock": true, 00:12:35.532 "num_base_bdevs": 4, 00:12:35.532 "num_base_bdevs_discovered": 3, 00:12:35.532 "num_base_bdevs_operational": 3, 00:12:35.532 "base_bdevs_list": [ 00:12:35.532 { 00:12:35.532 "name": "spare", 00:12:35.532 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": null, 00:12:35.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.532 "is_configured": false, 00:12:35.532 
"data_offset": 0, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": "BaseBdev3", 00:12:35.532 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 }, 00:12:35.532 { 00:12:35.532 "name": "BaseBdev4", 00:12:35.532 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:35.532 "is_configured": true, 00:12:35.532 "data_offset": 2048, 00:12:35.532 "data_size": 63488 00:12:35.532 } 00:12:35.532 ] 00:12:35.532 }' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.532 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.792 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.792 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.792 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.792 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.792 18:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.792 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.792 "name": "raid_bdev1", 00:12:35.792 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:35.792 "strip_size_kb": 0, 00:12:35.792 "state": "online", 00:12:35.792 "raid_level": "raid1", 00:12:35.792 "superblock": true, 00:12:35.792 "num_base_bdevs": 4, 00:12:35.792 "num_base_bdevs_discovered": 3, 00:12:35.792 "num_base_bdevs_operational": 3, 00:12:35.792 "base_bdevs_list": [ 00:12:35.792 { 00:12:35.792 "name": "spare", 00:12:35.792 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:35.792 "is_configured": true, 00:12:35.792 "data_offset": 2048, 00:12:35.792 "data_size": 63488 00:12:35.792 }, 00:12:35.792 { 00:12:35.792 "name": null, 00:12:35.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.792 "is_configured": false, 00:12:35.792 "data_offset": 0, 00:12:35.792 "data_size": 63488 00:12:35.792 }, 00:12:35.792 { 00:12:35.792 "name": "BaseBdev3", 00:12:35.792 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:35.792 "is_configured": true, 00:12:35.792 "data_offset": 2048, 00:12:35.792 "data_size": 63488 00:12:35.792 }, 00:12:35.792 { 00:12:35.792 "name": "BaseBdev4", 00:12:35.792 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 
00:12:35.792 "is_configured": true, 00:12:35.792 "data_offset": 2048, 00:12:35.792 "data_size": 63488 00:12:35.792 } 00:12:35.792 ] 00:12:35.792 }' 00:12:35.792 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.792 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.051 97.75 IOPS, 293.25 MiB/s [2024-12-15T18:43:36.492Z] 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.051 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.051 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.051 [2024-12-15 18:43:36.426145] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.051 [2024-12-15 18:43:36.426229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.311 00:12:36.311 Latency(us) 00:12:36.311 [2024-12-15T18:43:36.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.311 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:36.311 raid_bdev1 : 8.16 96.54 289.63 0.00 0.00 13515.40 277.24 109894.43 00:12:36.311 [2024-12-15T18:43:36.752Z] =================================================================================================================== 00:12:36.311 [2024-12-15T18:43:36.752Z] Total : 96.54 289.63 0.00 0.00 13515.40 277.24 109894.43 00:12:36.311 [2024-12-15 18:43:36.493232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.311 [2024-12-15 18:43:36.493327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.311 [2024-12-15 18:43:36.493459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.311 [2024-12-15 18:43:36.493508] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:36.311 { 00:12:36.311 "results": [ 00:12:36.311 { 00:12:36.311 "job": "raid_bdev1", 00:12:36.311 "core_mask": "0x1", 00:12:36.311 "workload": "randrw", 00:12:36.311 "percentage": 50, 00:12:36.311 "status": "finished", 00:12:36.311 "queue_depth": 2, 00:12:36.311 "io_size": 3145728, 00:12:36.311 "runtime": 8.162021, 00:12:36.311 "iops": 96.54471606970871, 00:12:36.311 "mibps": 289.6341482091261, 00:12:36.311 "io_failed": 0, 00:12:36.311 "io_timeout": 0, 00:12:36.311 "avg_latency_us": 13515.402903819298, 00:12:36.311 "min_latency_us": 277.2401746724891, 00:12:36.311 "max_latency_us": 109894.42794759825 00:12:36.311 } 00:12:36.311 ], 00:12:36.311 "core_count": 1 00:12:36.311 } 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.311 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:36.570 /dev/nbd0 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.570 1+0 records in 00:12:36.570 1+0 records out 00:12:36.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307641 s, 13.3 MB/s 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.570 
18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.570 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:36.571 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.571 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.571 18:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:36.829 /dev/nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.829 
18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.829 1+0 records in 00:12:36.829 1+0 records out 00:12:36.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353852 s, 11.6 MB/s 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.829 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.089 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:37.349 /dev/nbd1 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:37.349 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.349 1+0 records in 00:12:37.349 1+0 records out 00:12:37.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418606 s, 9.8 MB/s 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:37.350 18:43:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.350 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.610 18:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 
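The `waitfornbd` helper traced above probes a freshly attached `/dev/nbd` device by `dd`'ing a single 4 KiB block and then `stat`'ing the copy to confirm a non-zero read landed. A minimal, self-contained sketch of that read-then-stat step is below; a temporary file stands in for `/dev/nbd1` so it runs on any Linux host, and the `src`/`probe` names are illustrative, not part of the SPDK scripts:

```shell
# Hedged sketch of the one-block readability probe seen in waitfornbd:
# read one 4 KiB block from the "device", then confirm a non-zero copy landed.
src=$(mktemp)
probe=$(mktemp)
dd if=/dev/zero of="$src" bs=4096 count=2 status=none   # 8 KiB stand-in "device"
dd if="$src" of="$probe" bs=4096 count=1 status=none    # the probe read
size=$(stat -c %s "$probe")                             # like: stat -c %s .../nbdtest
rm -f "$src" "$probe"
if [ "$size" != 0 ]; then
    echo "device readable ($size bytes)"                # → device readable (4096 bytes)
fi
```

The real check in `autotest_common.sh` additionally opens the device with `iflag=direct` and wraps the probe in a retry loop of up to 20 attempts (the `(( i <= 20 ))` counter in the trace); this sketch keeps only the core read-then-verify step.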
00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 [2024-12-15 18:43:38.101065] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.870 [2024-12-15 18:43:38.101120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.870 [2024-12-15 18:43:38.101140] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:37.870 [2024-12-15 18:43:38.101150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.870 [2024-12-15 18:43:38.103319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.870 [2024-12-15 18:43:38.103357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.870 [2024-12-15 18:43:38.103441] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:37.870 [2024-12-15 18:43:38.103499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.870 [2024-12-15 18:43:38.103613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 
00:12:37.870 [2024-12-15 18:43:38.103717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.870 spare 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 [2024-12-15 18:43:38.203604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:37.870 [2024-12-15 18:43:38.203636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.870 [2024-12-15 18:43:38.203912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:12:37.870 [2024-12-15 18:43:38.204067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:37.870 [2024-12-15 18:43:38.204101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:37.870 [2024-12-15 18:43:38.204229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.870 "name": "raid_bdev1", 00:12:37.870 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:37.870 "strip_size_kb": 0, 00:12:37.870 "state": "online", 00:12:37.870 "raid_level": "raid1", 00:12:37.870 "superblock": true, 00:12:37.870 "num_base_bdevs": 4, 00:12:37.870 "num_base_bdevs_discovered": 3, 00:12:37.870 "num_base_bdevs_operational": 3, 00:12:37.870 "base_bdevs_list": [ 00:12:37.870 { 00:12:37.870 "name": "spare", 00:12:37.870 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:37.870 "is_configured": true, 00:12:37.870 "data_offset": 2048, 00:12:37.870 "data_size": 63488 00:12:37.870 }, 00:12:37.870 { 00:12:37.870 "name": null, 00:12:37.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.870 
"is_configured": false, 00:12:37.870 "data_offset": 2048, 00:12:37.870 "data_size": 63488 00:12:37.870 }, 00:12:37.870 { 00:12:37.870 "name": "BaseBdev3", 00:12:37.870 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:37.870 "is_configured": true, 00:12:37.870 "data_offset": 2048, 00:12:37.870 "data_size": 63488 00:12:37.870 }, 00:12:37.870 { 00:12:37.870 "name": "BaseBdev4", 00:12:37.870 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:37.870 "is_configured": true, 00:12:37.870 "data_offset": 2048, 00:12:37.870 "data_size": 63488 00:12:37.870 } 00:12:37.870 ] 00:12:37.870 }' 00:12:37.870 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.871 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.440 "name": "raid_bdev1", 00:12:38.440 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:38.440 "strip_size_kb": 0, 00:12:38.440 "state": "online", 00:12:38.440 "raid_level": "raid1", 00:12:38.440 "superblock": true, 00:12:38.440 "num_base_bdevs": 4, 00:12:38.440 "num_base_bdevs_discovered": 3, 00:12:38.440 "num_base_bdevs_operational": 3, 00:12:38.440 "base_bdevs_list": [ 00:12:38.440 { 00:12:38.440 "name": "spare", 00:12:38.440 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:38.440 "is_configured": true, 00:12:38.440 "data_offset": 2048, 00:12:38.440 "data_size": 63488 00:12:38.440 }, 00:12:38.440 { 00:12:38.440 "name": null, 00:12:38.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.440 "is_configured": false, 00:12:38.440 "data_offset": 2048, 00:12:38.440 "data_size": 63488 00:12:38.440 }, 00:12:38.440 { 00:12:38.440 "name": "BaseBdev3", 00:12:38.440 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:38.440 "is_configured": true, 00:12:38.440 "data_offset": 2048, 00:12:38.440 "data_size": 63488 00:12:38.440 }, 00:12:38.440 { 00:12:38.440 "name": "BaseBdev4", 00:12:38.440 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:38.440 "is_configured": true, 00:12:38.440 "data_offset": 2048, 00:12:38.440 "data_size": 63488 00:12:38.440 } 00:12:38.440 ] 00:12:38.440 }' 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.440 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.700 [2024-12-15 18:43:38.884663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.700 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.700 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.701 18:43:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.701 "name": "raid_bdev1", 00:12:38.701 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:38.701 "strip_size_kb": 0, 00:12:38.701 "state": "online", 00:12:38.701 "raid_level": "raid1", 00:12:38.701 "superblock": true, 00:12:38.701 "num_base_bdevs": 4, 00:12:38.701 "num_base_bdevs_discovered": 2, 00:12:38.701 "num_base_bdevs_operational": 2, 00:12:38.701 "base_bdevs_list": [ 00:12:38.701 { 00:12:38.701 "name": null, 00:12:38.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.701 "is_configured": false, 00:12:38.701 "data_offset": 0, 00:12:38.701 "data_size": 63488 00:12:38.701 }, 00:12:38.701 { 00:12:38.701 "name": null, 00:12:38.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.701 "is_configured": false, 00:12:38.701 "data_offset": 2048, 00:12:38.701 "data_size": 63488 00:12:38.701 }, 00:12:38.701 { 00:12:38.701 "name": "BaseBdev3", 00:12:38.701 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:38.701 "is_configured": true, 00:12:38.701 "data_offset": 2048, 00:12:38.701 "data_size": 63488 00:12:38.701 }, 00:12:38.701 { 00:12:38.701 "name": "BaseBdev4", 00:12:38.701 "uuid": 
"52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:38.701 "is_configured": true, 00:12:38.701 "data_offset": 2048, 00:12:38.701 "data_size": 63488 00:12:38.701 } 00:12:38.701 ] 00:12:38.701 }' 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.701 18:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.960 18:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.960 18:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.960 18:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.960 [2024-12-15 18:43:39.332250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.960 [2024-12-15 18:43:39.332449] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:38.960 [2024-12-15 18:43:39.332466] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:38.960 [2024-12-15 18:43:39.332511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.960 [2024-12-15 18:43:39.337033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:12:38.960 18:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.960 18:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:38.960 [2024-12-15 18:43:39.338899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.340 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.340 "name": "raid_bdev1", 00:12:40.340 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:40.340 "strip_size_kb": 0, 00:12:40.340 "state": "online", 
00:12:40.340 "raid_level": "raid1", 00:12:40.340 "superblock": true, 00:12:40.340 "num_base_bdevs": 4, 00:12:40.340 "num_base_bdevs_discovered": 3, 00:12:40.340 "num_base_bdevs_operational": 3, 00:12:40.340 "process": { 00:12:40.340 "type": "rebuild", 00:12:40.340 "target": "spare", 00:12:40.340 "progress": { 00:12:40.340 "blocks": 20480, 00:12:40.340 "percent": 32 00:12:40.340 } 00:12:40.341 }, 00:12:40.341 "base_bdevs_list": [ 00:12:40.341 { 00:12:40.341 "name": "spare", 00:12:40.341 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:40.341 "is_configured": true, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": null, 00:12:40.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.341 "is_configured": false, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": "BaseBdev3", 00:12:40.341 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:40.341 "is_configured": true, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": "BaseBdev4", 00:12:40.341 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:40.341 "is_configured": true, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 } 00:12:40.341 ] 00:12:40.341 }' 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:40.341 18:43:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.341 [2024-12-15 18:43:40.503557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.341 [2024-12-15 18:43:40.543417] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.341 [2024-12-15 18:43:40.543475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.341 [2024-12-15 18:43:40.543491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.341 [2024-12-15 18:43:40.543499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.341 18:43:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.341 "name": "raid_bdev1", 00:12:40.341 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:40.341 "strip_size_kb": 0, 00:12:40.341 "state": "online", 00:12:40.341 "raid_level": "raid1", 00:12:40.341 "superblock": true, 00:12:40.341 "num_base_bdevs": 4, 00:12:40.341 "num_base_bdevs_discovered": 2, 00:12:40.341 "num_base_bdevs_operational": 2, 00:12:40.341 "base_bdevs_list": [ 00:12:40.341 { 00:12:40.341 "name": null, 00:12:40.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.341 "is_configured": false, 00:12:40.341 "data_offset": 0, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": null, 00:12:40.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.341 "is_configured": false, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": "BaseBdev3", 00:12:40.341 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:40.341 "is_configured": true, 00:12:40.341 "data_offset": 2048, 00:12:40.341 "data_size": 63488 00:12:40.341 }, 00:12:40.341 { 00:12:40.341 "name": "BaseBdev4", 00:12:40.341 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:40.341 "is_configured": true, 00:12:40.341 "data_offset": 2048, 00:12:40.341 
"data_size": 63488 00:12:40.341 } 00:12:40.341 ] 00:12:40.341 }' 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.341 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.601 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.601 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.601 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.601 [2024-12-15 18:43:40.991434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.601 [2024-12-15 18:43:40.991499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.601 [2024-12-15 18:43:40.991524] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:40.601 [2024-12-15 18:43:40.991535] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.601 [2024-12-15 18:43:40.991976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.601 [2024-12-15 18:43:40.992007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.601 [2024-12-15 18:43:40.992092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:40.601 [2024-12-15 18:43:40.992107] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.601 [2024-12-15 18:43:40.992116] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:40.601 [2024-12-15 18:43:40.992150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.601 [2024-12-15 18:43:40.996284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:12:40.601 spare 00:12:40.601 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.601 18:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:40.601 [2024-12-15 18:43:40.998165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.982 18:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.982 18:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.982 18:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.982 18:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.982 18:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.982 "name": "raid_bdev1", 00:12:41.982 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:41.982 "strip_size_kb": 0, 00:12:41.982 
"state": "online", 00:12:41.982 "raid_level": "raid1", 00:12:41.982 "superblock": true, 00:12:41.982 "num_base_bdevs": 4, 00:12:41.982 "num_base_bdevs_discovered": 3, 00:12:41.982 "num_base_bdevs_operational": 3, 00:12:41.982 "process": { 00:12:41.982 "type": "rebuild", 00:12:41.982 "target": "spare", 00:12:41.982 "progress": { 00:12:41.982 "blocks": 20480, 00:12:41.982 "percent": 32 00:12:41.982 } 00:12:41.982 }, 00:12:41.982 "base_bdevs_list": [ 00:12:41.982 { 00:12:41.982 "name": "spare", 00:12:41.982 "uuid": "e680d06c-e117-52a7-91ae-86959e6bdb22", 00:12:41.982 "is_configured": true, 00:12:41.982 "data_offset": 2048, 00:12:41.982 "data_size": 63488 00:12:41.982 }, 00:12:41.982 { 00:12:41.982 "name": null, 00:12:41.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.982 "is_configured": false, 00:12:41.982 "data_offset": 2048, 00:12:41.982 "data_size": 63488 00:12:41.982 }, 00:12:41.982 { 00:12:41.982 "name": "BaseBdev3", 00:12:41.982 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:41.982 "is_configured": true, 00:12:41.982 "data_offset": 2048, 00:12:41.982 "data_size": 63488 00:12:41.982 }, 00:12:41.982 { 00:12:41.982 "name": "BaseBdev4", 00:12:41.982 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:41.982 "is_configured": true, 00:12:41.982 "data_offset": 2048, 00:12:41.982 "data_size": 63488 00:12:41.982 } 00:12:41.982 ] 00:12:41.982 }' 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:41.982 18:43:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.982 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.982 [2024-12-15 18:43:42.162523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.982 [2024-12-15 18:43:42.202370] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:41.983 [2024-12-15 18:43:42.202423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.983 [2024-12-15 18:43:42.202439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.983 [2024-12-15 18:43:42.202445] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.983 18:43:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.983 "name": "raid_bdev1", 00:12:41.983 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:41.983 "strip_size_kb": 0, 00:12:41.983 "state": "online", 00:12:41.983 "raid_level": "raid1", 00:12:41.983 "superblock": true, 00:12:41.983 "num_base_bdevs": 4, 00:12:41.983 "num_base_bdevs_discovered": 2, 00:12:41.983 "num_base_bdevs_operational": 2, 00:12:41.983 "base_bdevs_list": [ 00:12:41.983 { 00:12:41.983 "name": null, 00:12:41.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.983 "is_configured": false, 00:12:41.983 "data_offset": 0, 00:12:41.983 "data_size": 63488 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "name": null, 00:12:41.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.983 "is_configured": false, 00:12:41.983 "data_offset": 2048, 00:12:41.983 "data_size": 63488 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "name": "BaseBdev3", 00:12:41.983 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:41.983 "is_configured": true, 00:12:41.983 "data_offset": 2048, 00:12:41.983 "data_size": 63488 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "name": "BaseBdev4", 00:12:41.983 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:41.983 "is_configured": true, 00:12:41.983 "data_offset": 2048, 00:12:41.983 
"data_size": 63488 00:12:41.983 } 00:12:41.983 ] 00:12:41.983 }' 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.983 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.243 "name": "raid_bdev1", 00:12:42.243 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:42.243 "strip_size_kb": 0, 00:12:42.243 "state": "online", 00:12:42.243 "raid_level": "raid1", 00:12:42.243 "superblock": true, 00:12:42.243 "num_base_bdevs": 4, 00:12:42.243 "num_base_bdevs_discovered": 2, 00:12:42.243 "num_base_bdevs_operational": 2, 00:12:42.243 "base_bdevs_list": [ 00:12:42.243 { 00:12:42.243 "name": null, 00:12:42.243 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:42.243 "is_configured": false, 00:12:42.243 "data_offset": 0, 00:12:42.243 "data_size": 63488 00:12:42.243 }, 00:12:42.243 { 00:12:42.243 "name": null, 00:12:42.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.243 "is_configured": false, 00:12:42.243 "data_offset": 2048, 00:12:42.243 "data_size": 63488 00:12:42.243 }, 00:12:42.243 { 00:12:42.243 "name": "BaseBdev3", 00:12:42.243 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:42.243 "is_configured": true, 00:12:42.243 "data_offset": 2048, 00:12:42.243 "data_size": 63488 00:12:42.243 }, 00:12:42.243 { 00:12:42.243 "name": "BaseBdev4", 00:12:42.243 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:42.243 "is_configured": true, 00:12:42.243 "data_offset": 2048, 00:12:42.243 "data_size": 63488 00:12:42.243 } 00:12:42.243 ] 00:12:42.243 }' 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.243 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.503 18:43:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.503 [2024-12-15 18:43:42.725992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.503 [2024-12-15 18:43:42.726064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.503 [2024-12-15 18:43:42.726086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:12:42.503 [2024-12-15 18:43:42.726095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.503 [2024-12-15 18:43:42.726508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.503 [2024-12-15 18:43:42.726536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.503 [2024-12-15 18:43:42.726610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:42.503 [2024-12-15 18:43:42.726626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:42.503 [2024-12-15 18:43:42.726637] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:42.503 [2024-12-15 18:43:42.726648] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:42.503 BaseBdev1 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.503 18:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.442 "name": "raid_bdev1", 00:12:43.442 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:43.442 "strip_size_kb": 0, 00:12:43.442 "state": "online", 00:12:43.442 "raid_level": "raid1", 00:12:43.442 "superblock": true, 00:12:43.442 "num_base_bdevs": 4, 00:12:43.442 "num_base_bdevs_discovered": 2, 00:12:43.442 "num_base_bdevs_operational": 2, 00:12:43.442 "base_bdevs_list": [ 00:12:43.442 { 00:12:43.442 "name": null, 00:12:43.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.442 "is_configured": false, 00:12:43.442 
"data_offset": 0, 00:12:43.442 "data_size": 63488 00:12:43.442 }, 00:12:43.442 { 00:12:43.442 "name": null, 00:12:43.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.442 "is_configured": false, 00:12:43.442 "data_offset": 2048, 00:12:43.442 "data_size": 63488 00:12:43.442 }, 00:12:43.442 { 00:12:43.442 "name": "BaseBdev3", 00:12:43.442 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:43.442 "is_configured": true, 00:12:43.442 "data_offset": 2048, 00:12:43.442 "data_size": 63488 00:12:43.442 }, 00:12:43.442 { 00:12:43.442 "name": "BaseBdev4", 00:12:43.442 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:43.442 "is_configured": true, 00:12:43.442 "data_offset": 2048, 00:12:43.442 "data_size": 63488 00:12:43.442 } 00:12:43.442 ] 00:12:43.442 }' 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.442 18:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.012 "name": "raid_bdev1", 00:12:44.012 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:44.012 "strip_size_kb": 0, 00:12:44.012 "state": "online", 00:12:44.012 "raid_level": "raid1", 00:12:44.012 "superblock": true, 00:12:44.012 "num_base_bdevs": 4, 00:12:44.012 "num_base_bdevs_discovered": 2, 00:12:44.012 "num_base_bdevs_operational": 2, 00:12:44.012 "base_bdevs_list": [ 00:12:44.012 { 00:12:44.012 "name": null, 00:12:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.012 "is_configured": false, 00:12:44.012 "data_offset": 0, 00:12:44.012 "data_size": 63488 00:12:44.012 }, 00:12:44.012 { 00:12:44.012 "name": null, 00:12:44.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.012 "is_configured": false, 00:12:44.012 "data_offset": 2048, 00:12:44.012 "data_size": 63488 00:12:44.012 }, 00:12:44.012 { 00:12:44.012 "name": "BaseBdev3", 00:12:44.012 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:44.012 "is_configured": true, 00:12:44.012 "data_offset": 2048, 00:12:44.012 "data_size": 63488 00:12:44.012 }, 00:12:44.012 { 00:12:44.012 "name": "BaseBdev4", 00:12:44.012 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:44.012 "is_configured": true, 00:12:44.012 "data_offset": 2048, 00:12:44.012 "data_size": 63488 00:12:44.012 } 00:12:44.012 ] 00:12:44.012 }' 00:12:44.012 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.013 [2024-12-15 18:43:44.347591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.013 [2024-12-15 18:43:44.347798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:44.013 [2024-12-15 18:43:44.347869] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:44.013 request: 00:12:44.013 { 00:12:44.013 "base_bdev": "BaseBdev1", 00:12:44.013 "raid_bdev": "raid_bdev1", 00:12:44.013 "method": "bdev_raid_add_base_bdev", 00:12:44.013 "req_id": 1 00:12:44.013 } 00:12:44.013 Got JSON-RPC error response 00:12:44.013 response: 00:12:44.013 { 00:12:44.013 "code": -22, 
00:12:44.013 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:44.013 } 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:44.013 18:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.952 18:43:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.952 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.211 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.211 "name": "raid_bdev1", 00:12:45.211 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:45.211 "strip_size_kb": 0, 00:12:45.211 "state": "online", 00:12:45.211 "raid_level": "raid1", 00:12:45.211 "superblock": true, 00:12:45.211 "num_base_bdevs": 4, 00:12:45.211 "num_base_bdevs_discovered": 2, 00:12:45.211 "num_base_bdevs_operational": 2, 00:12:45.211 "base_bdevs_list": [ 00:12:45.211 { 00:12:45.211 "name": null, 00:12:45.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.211 "is_configured": false, 00:12:45.211 "data_offset": 0, 00:12:45.211 "data_size": 63488 00:12:45.211 }, 00:12:45.211 { 00:12:45.211 "name": null, 00:12:45.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.211 "is_configured": false, 00:12:45.211 "data_offset": 2048, 00:12:45.211 "data_size": 63488 00:12:45.211 }, 00:12:45.211 { 00:12:45.211 "name": "BaseBdev3", 00:12:45.211 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:45.211 "is_configured": true, 00:12:45.211 "data_offset": 2048, 00:12:45.211 "data_size": 63488 00:12:45.211 }, 00:12:45.211 { 00:12:45.211 "name": "BaseBdev4", 00:12:45.211 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:45.211 "is_configured": true, 00:12:45.211 "data_offset": 2048, 00:12:45.211 "data_size": 63488 00:12:45.211 } 00:12:45.211 ] 00:12:45.211 }' 00:12:45.211 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.211 18:43:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.470 "name": "raid_bdev1", 00:12:45.470 "uuid": "9e25edde-dc25-4605-a3ed-0d2e387a05a0", 00:12:45.470 "strip_size_kb": 0, 00:12:45.470 "state": "online", 00:12:45.470 "raid_level": "raid1", 00:12:45.470 "superblock": true, 00:12:45.470 "num_base_bdevs": 4, 00:12:45.470 "num_base_bdevs_discovered": 2, 00:12:45.470 "num_base_bdevs_operational": 2, 00:12:45.470 "base_bdevs_list": [ 00:12:45.470 { 00:12:45.470 "name": null, 00:12:45.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.470 "is_configured": false, 00:12:45.470 "data_offset": 0, 00:12:45.470 "data_size": 63488 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": null, 00:12:45.470 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:45.470 "is_configured": false, 00:12:45.470 "data_offset": 2048, 00:12:45.470 "data_size": 63488 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": "BaseBdev3", 00:12:45.470 "uuid": "095c5881-4c23-5efd-9f2f-ab105670d46c", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 2048, 00:12:45.470 "data_size": 63488 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": "BaseBdev4", 00:12:45.470 "uuid": "52fa80dc-b9e3-5c52-912b-92603c8154af", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 2048, 00:12:45.470 "data_size": 63488 00:12:45.470 } 00:12:45.470 ] 00:12:45.470 }' 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.470 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91683 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 91683 ']' 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 91683 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91683 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.730 killing process with pid 91683 00:12:45.730 Received shutdown signal, test time was about 17.638090 
seconds 00:12:45.730 00:12:45.730 Latency(us) 00:12:45.730 [2024-12-15T18:43:46.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.730 [2024-12-15T18:43:46.171Z] =================================================================================================================== 00:12:45.730 [2024-12-15T18:43:46.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91683' 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 91683 00:12:45.730 [2024-12-15 18:43:45.947775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.730 [2024-12-15 18:43:45.947927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.730 18:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 91683 00:12:45.730 [2024-12-15 18:43:45.947993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.730 [2024-12-15 18:43:45.948005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:45.730 [2024-12-15 18:43:45.994880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.991 18:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:45.991 ************************************ 00:12:45.991 END TEST raid_rebuild_test_sb_io 00:12:45.991 ************************************ 00:12:45.991 00:12:45.991 real 0m19.621s 00:12:45.991 user 0m26.110s 00:12:45.991 sys 0m2.521s 00:12:45.991 18:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.991 18:43:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.991 18:43:46 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:45.991 18:43:46 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:45.991 18:43:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:45.991 18:43:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.991 18:43:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.991 ************************************ 00:12:45.991 START TEST raid5f_state_function_test 00:12:45.991 ************************************ 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.991 18:43:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:45.991 Process raid pid: 92388 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92388 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.991 
18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92388' 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92388 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 92388 ']' 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.991 18:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.991 [2024-12-15 18:43:46.388236] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:12:45.991 [2024-12-15 18:43:46.388495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.250 [2024-12-15 18:43:46.567716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.250 [2024-12-15 18:43:46.592350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.250 [2024-12-15 18:43:46.634676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.250 [2024-12-15 18:43:46.634793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.821 [2024-12-15 18:43:47.197751] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.821 [2024-12-15 18:43:47.197866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.821 [2024-12-15 18:43:47.197883] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.821 [2024-12-15 18:43:47.197893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.821 [2024-12-15 18:43:47.197899] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:46.821 [2024-12-15 18:43:47.197911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.821 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.822 "name": "Existed_Raid", 00:12:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.822 "strip_size_kb": 64, 00:12:46.822 "state": "configuring", 00:12:46.822 "raid_level": "raid5f", 00:12:46.822 "superblock": false, 00:12:46.822 "num_base_bdevs": 3, 00:12:46.822 "num_base_bdevs_discovered": 0, 00:12:46.822 "num_base_bdevs_operational": 3, 00:12:46.822 "base_bdevs_list": [ 00:12:46.822 { 00:12:46.822 "name": "BaseBdev1", 00:12:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.822 "is_configured": false, 00:12:46.822 "data_offset": 0, 00:12:46.822 "data_size": 0 00:12:46.822 }, 00:12:46.822 { 00:12:46.822 "name": "BaseBdev2", 00:12:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.822 "is_configured": false, 00:12:46.822 "data_offset": 0, 00:12:46.822 "data_size": 0 00:12:46.822 }, 00:12:46.822 { 00:12:46.822 "name": "BaseBdev3", 00:12:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.822 "is_configured": false, 00:12:46.822 "data_offset": 0, 00:12:46.822 "data_size": 0 00:12:46.822 } 00:12:46.822 ] 00:12:46.822 }' 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.822 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 [2024-12-15 18:43:47.672861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.396 [2024-12-15 18:43:47.672905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 [2024-12-15 18:43:47.684850] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.396 [2024-12-15 18:43:47.684889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.396 [2024-12-15 18:43:47.684898] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.396 [2024-12-15 18:43:47.684906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.396 [2024-12-15 18:43:47.684912] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.396 [2024-12-15 18:43:47.684921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 [2024-12-15 18:43:47.705727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.396 BaseBdev1 00:12:47.396 18:43:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 [ 00:12:47.396 { 00:12:47.396 "name": "BaseBdev1", 00:12:47.396 "aliases": [ 00:12:47.396 "aa804df4-c7bb-4405-afd8-b53d34d59f84" 00:12:47.396 ], 00:12:47.396 "product_name": "Malloc disk", 00:12:47.396 "block_size": 512, 00:12:47.396 "num_blocks": 65536, 00:12:47.396 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:47.396 "assigned_rate_limits": { 00:12:47.396 "rw_ios_per_sec": 0, 00:12:47.396 
"rw_mbytes_per_sec": 0, 00:12:47.396 "r_mbytes_per_sec": 0, 00:12:47.396 "w_mbytes_per_sec": 0 00:12:47.396 }, 00:12:47.396 "claimed": true, 00:12:47.396 "claim_type": "exclusive_write", 00:12:47.396 "zoned": false, 00:12:47.396 "supported_io_types": { 00:12:47.396 "read": true, 00:12:47.396 "write": true, 00:12:47.396 "unmap": true, 00:12:47.396 "flush": true, 00:12:47.396 "reset": true, 00:12:47.396 "nvme_admin": false, 00:12:47.396 "nvme_io": false, 00:12:47.396 "nvme_io_md": false, 00:12:47.396 "write_zeroes": true, 00:12:47.396 "zcopy": true, 00:12:47.396 "get_zone_info": false, 00:12:47.396 "zone_management": false, 00:12:47.396 "zone_append": false, 00:12:47.396 "compare": false, 00:12:47.396 "compare_and_write": false, 00:12:47.396 "abort": true, 00:12:47.396 "seek_hole": false, 00:12:47.396 "seek_data": false, 00:12:47.396 "copy": true, 00:12:47.396 "nvme_iov_md": false 00:12:47.396 }, 00:12:47.396 "memory_domains": [ 00:12:47.396 { 00:12:47.396 "dma_device_id": "system", 00:12:47.396 "dma_device_type": 1 00:12:47.396 }, 00:12:47.396 { 00:12:47.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.396 "dma_device_type": 2 00:12:47.396 } 00:12:47.396 ], 00:12:47.396 "driver_specific": {} 00:12:47.396 } 00:12:47.396 ] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.396 18:43:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.396 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.397 "name": "Existed_Raid", 00:12:47.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.397 "strip_size_kb": 64, 00:12:47.397 "state": "configuring", 00:12:47.397 "raid_level": "raid5f", 00:12:47.397 "superblock": false, 00:12:47.397 "num_base_bdevs": 3, 00:12:47.397 "num_base_bdevs_discovered": 1, 00:12:47.397 "num_base_bdevs_operational": 3, 00:12:47.397 "base_bdevs_list": [ 00:12:47.397 { 00:12:47.397 "name": "BaseBdev1", 00:12:47.397 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:47.397 "is_configured": true, 00:12:47.397 "data_offset": 0, 00:12:47.397 "data_size": 65536 00:12:47.397 }, 00:12:47.397 { 00:12:47.397 "name": 
"BaseBdev2", 00:12:47.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.397 "is_configured": false, 00:12:47.397 "data_offset": 0, 00:12:47.397 "data_size": 0 00:12:47.397 }, 00:12:47.397 { 00:12:47.397 "name": "BaseBdev3", 00:12:47.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.397 "is_configured": false, 00:12:47.397 "data_offset": 0, 00:12:47.397 "data_size": 0 00:12:47.397 } 00:12:47.397 ] 00:12:47.397 }' 00:12:47.397 18:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.397 18:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.966 [2024-12-15 18:43:48.192891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.966 [2024-12-15 18:43:48.192985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.966 [2024-12-15 18:43:48.204917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.966 [2024-12-15 18:43:48.206734] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:47.966 [2024-12-15 18:43:48.206817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.966 [2024-12-15 18:43:48.206849] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.966 [2024-12-15 18:43:48.206873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.966 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.967 "name": "Existed_Raid", 00:12:47.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.967 "strip_size_kb": 64, 00:12:47.967 "state": "configuring", 00:12:47.967 "raid_level": "raid5f", 00:12:47.967 "superblock": false, 00:12:47.967 "num_base_bdevs": 3, 00:12:47.967 "num_base_bdevs_discovered": 1, 00:12:47.967 "num_base_bdevs_operational": 3, 00:12:47.967 "base_bdevs_list": [ 00:12:47.967 { 00:12:47.967 "name": "BaseBdev1", 00:12:47.967 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:47.967 "is_configured": true, 00:12:47.967 "data_offset": 0, 00:12:47.967 "data_size": 65536 00:12:47.967 }, 00:12:47.967 { 00:12:47.967 "name": "BaseBdev2", 00:12:47.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.967 "is_configured": false, 00:12:47.967 "data_offset": 0, 00:12:47.967 "data_size": 0 00:12:47.967 }, 00:12:47.967 { 00:12:47.967 "name": "BaseBdev3", 00:12:47.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.967 "is_configured": false, 00:12:47.967 "data_offset": 0, 00:12:47.967 "data_size": 0 00:12:47.967 } 00:12:47.967 ] 00:12:47.967 }' 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.967 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.226 18:43:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.226 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.226 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.486 BaseBdev2 00:12:48.486 [2024-12-15 18:43:48.675141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.486 18:43:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.486 [ 00:12:48.486 { 00:12:48.486 "name": "BaseBdev2", 00:12:48.486 "aliases": [ 00:12:48.486 "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae" 00:12:48.486 ], 00:12:48.486 "product_name": "Malloc disk", 00:12:48.486 "block_size": 512, 00:12:48.486 "num_blocks": 65536, 00:12:48.486 "uuid": "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae", 00:12:48.486 "assigned_rate_limits": { 00:12:48.486 "rw_ios_per_sec": 0, 00:12:48.486 "rw_mbytes_per_sec": 0, 00:12:48.486 "r_mbytes_per_sec": 0, 00:12:48.486 "w_mbytes_per_sec": 0 00:12:48.486 }, 00:12:48.486 "claimed": true, 00:12:48.486 "claim_type": "exclusive_write", 00:12:48.486 "zoned": false, 00:12:48.486 "supported_io_types": { 00:12:48.486 "read": true, 00:12:48.486 "write": true, 00:12:48.486 "unmap": true, 00:12:48.486 "flush": true, 00:12:48.486 "reset": true, 00:12:48.486 "nvme_admin": false, 00:12:48.486 "nvme_io": false, 00:12:48.486 "nvme_io_md": false, 00:12:48.486 "write_zeroes": true, 00:12:48.486 "zcopy": true, 00:12:48.486 "get_zone_info": false, 00:12:48.486 "zone_management": false, 00:12:48.486 "zone_append": false, 00:12:48.486 "compare": false, 00:12:48.486 "compare_and_write": false, 00:12:48.486 "abort": true, 00:12:48.486 "seek_hole": false, 00:12:48.486 "seek_data": false, 00:12:48.486 "copy": true, 00:12:48.486 "nvme_iov_md": false 00:12:48.486 }, 00:12:48.486 "memory_domains": [ 00:12:48.486 { 00:12:48.486 "dma_device_id": "system", 00:12:48.487 "dma_device_type": 1 00:12:48.487 }, 00:12:48.487 { 00:12:48.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.487 "dma_device_type": 2 00:12:48.487 } 00:12:48.487 ], 00:12:48.487 "driver_specific": {} 00:12:48.487 } 00:12:48.487 ] 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:48.487 "name": "Existed_Raid", 00:12:48.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.487 "strip_size_kb": 64, 00:12:48.487 "state": "configuring", 00:12:48.487 "raid_level": "raid5f", 00:12:48.487 "superblock": false, 00:12:48.487 "num_base_bdevs": 3, 00:12:48.487 "num_base_bdevs_discovered": 2, 00:12:48.487 "num_base_bdevs_operational": 3, 00:12:48.487 "base_bdevs_list": [ 00:12:48.487 { 00:12:48.487 "name": "BaseBdev1", 00:12:48.487 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:48.487 "is_configured": true, 00:12:48.487 "data_offset": 0, 00:12:48.487 "data_size": 65536 00:12:48.487 }, 00:12:48.487 { 00:12:48.487 "name": "BaseBdev2", 00:12:48.487 "uuid": "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae", 00:12:48.487 "is_configured": true, 00:12:48.487 "data_offset": 0, 00:12:48.487 "data_size": 65536 00:12:48.487 }, 00:12:48.487 { 00:12:48.487 "name": "BaseBdev3", 00:12:48.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.487 "is_configured": false, 00:12:48.487 "data_offset": 0, 00:12:48.487 "data_size": 0 00:12:48.487 } 00:12:48.487 ] 00:12:48.487 }' 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.487 18:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 [2024-12-15 18:43:49.102256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.747 [2024-12-15 18:43:49.102611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:48.747 [2024-12-15 18:43:49.102847] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:48.747 [2024-12-15 18:43:49.103980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:48.747 [2024-12-15 18:43:49.105713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:48.747 [2024-12-15 18:43:49.105954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:48.747 BaseBdev3 00:12:48.747 [2024-12-15 18:43:49.106887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 [ 00:12:48.747 { 00:12:48.747 "name": "BaseBdev3", 00:12:48.747 "aliases": [ 00:12:48.747 "237af4ef-69bc-425e-b8d9-0879eca801d2" 00:12:48.747 ], 00:12:48.747 "product_name": "Malloc disk", 00:12:48.747 "block_size": 512, 00:12:48.747 "num_blocks": 65536, 00:12:48.747 "uuid": "237af4ef-69bc-425e-b8d9-0879eca801d2", 00:12:48.747 "assigned_rate_limits": { 00:12:48.747 "rw_ios_per_sec": 0, 00:12:48.747 "rw_mbytes_per_sec": 0, 00:12:48.747 "r_mbytes_per_sec": 0, 00:12:48.747 "w_mbytes_per_sec": 0 00:12:48.747 }, 00:12:48.747 "claimed": true, 00:12:48.747 "claim_type": "exclusive_write", 00:12:48.747 "zoned": false, 00:12:48.747 "supported_io_types": { 00:12:48.747 "read": true, 00:12:48.747 "write": true, 00:12:48.747 "unmap": true, 00:12:48.747 "flush": true, 00:12:48.747 "reset": true, 00:12:48.747 "nvme_admin": false, 00:12:48.747 "nvme_io": false, 00:12:48.747 "nvme_io_md": false, 00:12:48.747 "write_zeroes": true, 00:12:48.747 "zcopy": true, 00:12:48.747 "get_zone_info": false, 00:12:48.747 "zone_management": false, 00:12:48.747 "zone_append": false, 00:12:48.747 "compare": false, 00:12:48.747 "compare_and_write": false, 00:12:48.747 "abort": true, 00:12:48.747 "seek_hole": false, 00:12:48.747 "seek_data": false, 00:12:48.747 "copy": true, 00:12:48.747 "nvme_iov_md": false 00:12:48.747 }, 00:12:48.747 "memory_domains": [ 00:12:48.747 { 00:12:48.747 "dma_device_id": "system", 00:12:48.747 "dma_device_type": 1 00:12:48.747 }, 00:12:48.747 { 00:12:48.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.747 "dma_device_type": 2 00:12:48.747 } 00:12:48.747 ], 00:12:48.747 "driver_specific": {} 00:12:48.747 } 00:12:48.747 ] 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.747 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 18:43:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.007 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.007 "name": "Existed_Raid", 00:12:49.007 "uuid": "0290312a-ffb3-4ba0-ab60-397645317716", 00:12:49.007 "strip_size_kb": 64, 00:12:49.007 "state": "online", 00:12:49.007 "raid_level": "raid5f", 00:12:49.007 "superblock": false, 00:12:49.007 "num_base_bdevs": 3, 00:12:49.007 "num_base_bdevs_discovered": 3, 00:12:49.007 "num_base_bdevs_operational": 3, 00:12:49.007 "base_bdevs_list": [ 00:12:49.007 { 00:12:49.007 "name": "BaseBdev1", 00:12:49.007 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:49.007 "is_configured": true, 00:12:49.007 "data_offset": 0, 00:12:49.007 "data_size": 65536 00:12:49.007 }, 00:12:49.007 { 00:12:49.007 "name": "BaseBdev2", 00:12:49.007 "uuid": "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae", 00:12:49.007 "is_configured": true, 00:12:49.007 "data_offset": 0, 00:12:49.007 "data_size": 65536 00:12:49.007 }, 00:12:49.007 { 00:12:49.007 "name": "BaseBdev3", 00:12:49.007 "uuid": "237af4ef-69bc-425e-b8d9-0879eca801d2", 00:12:49.007 "is_configured": true, 00:12:49.007 "data_offset": 0, 00:12:49.007 "data_size": 65536 00:12:49.007 } 00:12:49.007 ] 00:12:49.007 }' 00:12:49.007 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.007 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:49.267 18:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 [2024-12-15 18:43:49.589793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:49.267 "name": "Existed_Raid", 00:12:49.267 "aliases": [ 00:12:49.267 "0290312a-ffb3-4ba0-ab60-397645317716" 00:12:49.267 ], 00:12:49.267 "product_name": "Raid Volume", 00:12:49.267 "block_size": 512, 00:12:49.267 "num_blocks": 131072, 00:12:49.267 "uuid": "0290312a-ffb3-4ba0-ab60-397645317716", 00:12:49.267 "assigned_rate_limits": { 00:12:49.267 "rw_ios_per_sec": 0, 00:12:49.267 "rw_mbytes_per_sec": 0, 00:12:49.267 "r_mbytes_per_sec": 0, 00:12:49.267 "w_mbytes_per_sec": 0 00:12:49.267 }, 00:12:49.267 "claimed": false, 00:12:49.267 "zoned": false, 00:12:49.267 "supported_io_types": { 00:12:49.267 "read": true, 00:12:49.267 "write": true, 00:12:49.267 "unmap": false, 00:12:49.267 "flush": false, 00:12:49.267 "reset": true, 00:12:49.267 "nvme_admin": false, 00:12:49.267 "nvme_io": false, 00:12:49.267 "nvme_io_md": false, 00:12:49.267 "write_zeroes": true, 00:12:49.267 "zcopy": false, 00:12:49.267 "get_zone_info": false, 00:12:49.267 "zone_management": false, 00:12:49.267 "zone_append": false, 
00:12:49.267 "compare": false, 00:12:49.267 "compare_and_write": false, 00:12:49.267 "abort": false, 00:12:49.267 "seek_hole": false, 00:12:49.267 "seek_data": false, 00:12:49.267 "copy": false, 00:12:49.267 "nvme_iov_md": false 00:12:49.267 }, 00:12:49.267 "driver_specific": { 00:12:49.267 "raid": { 00:12:49.267 "uuid": "0290312a-ffb3-4ba0-ab60-397645317716", 00:12:49.267 "strip_size_kb": 64, 00:12:49.267 "state": "online", 00:12:49.267 "raid_level": "raid5f", 00:12:49.267 "superblock": false, 00:12:49.267 "num_base_bdevs": 3, 00:12:49.267 "num_base_bdevs_discovered": 3, 00:12:49.267 "num_base_bdevs_operational": 3, 00:12:49.267 "base_bdevs_list": [ 00:12:49.267 { 00:12:49.267 "name": "BaseBdev1", 00:12:49.267 "uuid": "aa804df4-c7bb-4405-afd8-b53d34d59f84", 00:12:49.267 "is_configured": true, 00:12:49.267 "data_offset": 0, 00:12:49.267 "data_size": 65536 00:12:49.267 }, 00:12:49.267 { 00:12:49.267 "name": "BaseBdev2", 00:12:49.267 "uuid": "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae", 00:12:49.267 "is_configured": true, 00:12:49.267 "data_offset": 0, 00:12:49.267 "data_size": 65536 00:12:49.267 }, 00:12:49.267 { 00:12:49.267 "name": "BaseBdev3", 00:12:49.267 "uuid": "237af4ef-69bc-425e-b8d9-0879eca801d2", 00:12:49.267 "is_configured": true, 00:12:49.267 "data_offset": 0, 00:12:49.267 "data_size": 65536 00:12:49.267 } 00:12:49.267 ] 00:12:49.267 } 00:12:49.267 } 00:12:49.267 }' 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:49.267 BaseBdev2 00:12:49.267 BaseBdev3' 00:12:49.267 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.527 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.528 [2024-12-15 18:43:49.865157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:49.528 
18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.528 "name": "Existed_Raid", 00:12:49.528 "uuid": "0290312a-ffb3-4ba0-ab60-397645317716", 00:12:49.528 "strip_size_kb": 64, 00:12:49.528 "state": 
"online", 00:12:49.528 "raid_level": "raid5f", 00:12:49.528 "superblock": false, 00:12:49.528 "num_base_bdevs": 3, 00:12:49.528 "num_base_bdevs_discovered": 2, 00:12:49.528 "num_base_bdevs_operational": 2, 00:12:49.528 "base_bdevs_list": [ 00:12:49.528 { 00:12:49.528 "name": null, 00:12:49.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.528 "is_configured": false, 00:12:49.528 "data_offset": 0, 00:12:49.528 "data_size": 65536 00:12:49.528 }, 00:12:49.528 { 00:12:49.528 "name": "BaseBdev2", 00:12:49.528 "uuid": "d7bddddb-224a-4e25-aa9e-d0c40e30c9ae", 00:12:49.528 "is_configured": true, 00:12:49.528 "data_offset": 0, 00:12:49.528 "data_size": 65536 00:12:49.528 }, 00:12:49.528 { 00:12:49.528 "name": "BaseBdev3", 00:12:49.528 "uuid": "237af4ef-69bc-425e-b8d9-0879eca801d2", 00:12:49.528 "is_configured": true, 00:12:49.528 "data_offset": 0, 00:12:49.528 "data_size": 65536 00:12:49.528 } 00:12:49.528 ] 00:12:49.528 }' 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.528 18:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 [2024-12-15 18:43:50.323845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.098 [2024-12-15 18:43:50.323936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.098 [2024-12-15 18:43:50.335240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 [2024-12-15 18:43:50.395145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.098 [2024-12-15 18:43:50.395230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 BaseBdev2 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:50.098 [ 00:12:50.098 { 00:12:50.098 "name": "BaseBdev2", 00:12:50.098 "aliases": [ 00:12:50.098 "cc51ff70-eda9-46e0-9c7f-3bcee85b2495" 00:12:50.098 ], 00:12:50.098 "product_name": "Malloc disk", 00:12:50.098 "block_size": 512, 00:12:50.098 "num_blocks": 65536, 00:12:50.098 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:50.098 "assigned_rate_limits": { 00:12:50.098 "rw_ios_per_sec": 0, 00:12:50.098 "rw_mbytes_per_sec": 0, 00:12:50.098 "r_mbytes_per_sec": 0, 00:12:50.098 "w_mbytes_per_sec": 0 00:12:50.098 }, 00:12:50.098 "claimed": false, 00:12:50.098 "zoned": false, 00:12:50.098 "supported_io_types": { 00:12:50.098 "read": true, 00:12:50.098 "write": true, 00:12:50.098 "unmap": true, 00:12:50.098 "flush": true, 00:12:50.098 "reset": true, 00:12:50.098 "nvme_admin": false, 00:12:50.098 "nvme_io": false, 00:12:50.098 "nvme_io_md": false, 00:12:50.098 "write_zeroes": true, 00:12:50.098 "zcopy": true, 00:12:50.098 "get_zone_info": false, 00:12:50.098 "zone_management": false, 00:12:50.098 "zone_append": false, 00:12:50.098 "compare": false, 00:12:50.098 "compare_and_write": false, 00:12:50.098 "abort": true, 00:12:50.098 "seek_hole": false, 00:12:50.098 "seek_data": false, 00:12:50.098 "copy": true, 00:12:50.098 "nvme_iov_md": false 00:12:50.098 }, 00:12:50.098 "memory_domains": [ 00:12:50.098 { 00:12:50.098 "dma_device_id": "system", 00:12:50.098 "dma_device_type": 1 00:12:50.098 }, 00:12:50.098 { 00:12:50.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.098 "dma_device_type": 2 00:12:50.098 } 00:12:50.098 ], 00:12:50.098 "driver_specific": {} 00:12:50.098 } 00:12:50.098 ] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.098 BaseBdev3 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.098 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.099 18:43:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.358 [ 00:12:50.358 { 00:12:50.358 "name": "BaseBdev3", 00:12:50.358 "aliases": [ 00:12:50.358 "048c53c1-d737-4809-bd40-970e54d79352" 00:12:50.358 ], 00:12:50.358 "product_name": "Malloc disk", 00:12:50.358 "block_size": 512, 00:12:50.358 "num_blocks": 65536, 00:12:50.358 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:50.358 "assigned_rate_limits": { 00:12:50.358 "rw_ios_per_sec": 0, 00:12:50.358 "rw_mbytes_per_sec": 0, 00:12:50.358 "r_mbytes_per_sec": 0, 00:12:50.358 "w_mbytes_per_sec": 0 00:12:50.358 }, 00:12:50.358 "claimed": false, 00:12:50.358 "zoned": false, 00:12:50.358 "supported_io_types": { 00:12:50.358 "read": true, 00:12:50.358 "write": true, 00:12:50.358 "unmap": true, 00:12:50.358 "flush": true, 00:12:50.358 "reset": true, 00:12:50.358 "nvme_admin": false, 00:12:50.358 "nvme_io": false, 00:12:50.358 "nvme_io_md": false, 00:12:50.358 "write_zeroes": true, 00:12:50.358 "zcopy": true, 00:12:50.358 "get_zone_info": false, 00:12:50.358 "zone_management": false, 00:12:50.358 "zone_append": false, 00:12:50.358 "compare": false, 00:12:50.358 "compare_and_write": false, 00:12:50.358 "abort": true, 00:12:50.358 "seek_hole": false, 00:12:50.358 "seek_data": false, 00:12:50.358 "copy": true, 00:12:50.358 "nvme_iov_md": false 00:12:50.358 }, 00:12:50.358 "memory_domains": [ 00:12:50.358 { 00:12:50.358 "dma_device_id": "system", 00:12:50.358 "dma_device_type": 1 00:12:50.358 }, 00:12:50.358 { 00:12:50.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.358 "dma_device_type": 2 00:12:50.358 } 00:12:50.358 ], 00:12:50.358 "driver_specific": {} 00:12:50.358 } 00:12:50.358 ] 00:12:50.358 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.358 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:50.358 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.358 18:43:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.358 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 [2024-12-15 18:43:50.558658] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.359 [2024-12-15 18:43:50.558738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.359 [2024-12-15 18:43:50.558779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.359 [2024-12-15 18:43:50.560473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.359 18:43:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.359 "name": "Existed_Raid", 00:12:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.359 "strip_size_kb": 64, 00:12:50.359 "state": "configuring", 00:12:50.359 "raid_level": "raid5f", 00:12:50.359 "superblock": false, 00:12:50.359 "num_base_bdevs": 3, 00:12:50.359 "num_base_bdevs_discovered": 2, 00:12:50.359 "num_base_bdevs_operational": 3, 00:12:50.359 "base_bdevs_list": [ 00:12:50.359 { 00:12:50.359 "name": "BaseBdev1", 00:12:50.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.359 "is_configured": false, 00:12:50.359 "data_offset": 0, 00:12:50.359 "data_size": 0 00:12:50.359 }, 00:12:50.359 { 00:12:50.359 "name": "BaseBdev2", 00:12:50.359 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:50.359 "is_configured": true, 00:12:50.359 "data_offset": 0, 00:12:50.359 "data_size": 65536 00:12:50.359 }, 00:12:50.359 { 00:12:50.359 "name": "BaseBdev3", 00:12:50.359 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:50.359 "is_configured": true, 
00:12:50.359 "data_offset": 0, 00:12:50.359 "data_size": 65536 00:12:50.359 } 00:12:50.359 ] 00:12:50.359 }' 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.359 18:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.619 [2024-12-15 18:43:51.017866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.619 18:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.619 "name": "Existed_Raid", 00:12:50.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.619 "strip_size_kb": 64, 00:12:50.619 "state": "configuring", 00:12:50.619 "raid_level": "raid5f", 00:12:50.619 "superblock": false, 00:12:50.619 "num_base_bdevs": 3, 00:12:50.619 "num_base_bdevs_discovered": 1, 00:12:50.619 "num_base_bdevs_operational": 3, 00:12:50.619 "base_bdevs_list": [ 00:12:50.619 { 00:12:50.619 "name": "BaseBdev1", 00:12:50.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.619 "is_configured": false, 00:12:50.619 "data_offset": 0, 00:12:50.619 "data_size": 0 00:12:50.619 }, 00:12:50.619 { 00:12:50.619 "name": null, 00:12:50.619 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:50.619 "is_configured": false, 00:12:50.619 "data_offset": 0, 00:12:50.619 "data_size": 65536 00:12:50.619 }, 00:12:50.619 { 00:12:50.619 "name": "BaseBdev3", 00:12:50.619 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:50.619 "is_configured": true, 00:12:50.619 "data_offset": 0, 00:12:50.619 "data_size": 65536 00:12:50.619 } 00:12:50.619 ] 00:12:50.619 }' 00:12:50.619 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.619 18:43:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [2024-12-15 18:43:51.452063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.189 BaseBdev1 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.189 18:43:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 [ 00:12:51.189 { 00:12:51.189 "name": "BaseBdev1", 00:12:51.189 "aliases": [ 00:12:51.189 "59da53f7-ab10-40f4-b95c-46b33d4d50c6" 00:12:51.189 ], 00:12:51.189 "product_name": "Malloc disk", 00:12:51.189 "block_size": 512, 00:12:51.189 "num_blocks": 65536, 00:12:51.189 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:51.189 "assigned_rate_limits": { 00:12:51.189 "rw_ios_per_sec": 0, 00:12:51.189 "rw_mbytes_per_sec": 0, 00:12:51.189 "r_mbytes_per_sec": 0, 00:12:51.189 "w_mbytes_per_sec": 0 00:12:51.189 }, 00:12:51.189 "claimed": true, 00:12:51.189 "claim_type": "exclusive_write", 00:12:51.189 "zoned": false, 00:12:51.189 "supported_io_types": { 00:12:51.189 "read": true, 00:12:51.189 "write": true, 00:12:51.189 "unmap": true, 00:12:51.189 "flush": true, 00:12:51.189 "reset": true, 00:12:51.189 "nvme_admin": false, 00:12:51.189 "nvme_io": false, 00:12:51.189 "nvme_io_md": false, 00:12:51.189 "write_zeroes": true, 00:12:51.189 "zcopy": true, 00:12:51.189 "get_zone_info": false, 00:12:51.189 "zone_management": false, 00:12:51.189 "zone_append": false, 00:12:51.189 
"compare": false, 00:12:51.189 "compare_and_write": false, 00:12:51.189 "abort": true, 00:12:51.189 "seek_hole": false, 00:12:51.189 "seek_data": false, 00:12:51.189 "copy": true, 00:12:51.189 "nvme_iov_md": false 00:12:51.189 }, 00:12:51.189 "memory_domains": [ 00:12:51.189 { 00:12:51.189 "dma_device_id": "system", 00:12:51.189 "dma_device_type": 1 00:12:51.189 }, 00:12:51.189 { 00:12:51.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.189 "dma_device_type": 2 00:12:51.189 } 00:12:51.189 ], 00:12:51.189 "driver_specific": {} 00:12:51.189 } 00:12:51.189 ] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.189 18:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.189 "name": "Existed_Raid", 00:12:51.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.189 "strip_size_kb": 64, 00:12:51.189 "state": "configuring", 00:12:51.189 "raid_level": "raid5f", 00:12:51.189 "superblock": false, 00:12:51.189 "num_base_bdevs": 3, 00:12:51.189 "num_base_bdevs_discovered": 2, 00:12:51.189 "num_base_bdevs_operational": 3, 00:12:51.189 "base_bdevs_list": [ 00:12:51.189 { 00:12:51.189 "name": "BaseBdev1", 00:12:51.189 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:51.189 "is_configured": true, 00:12:51.189 "data_offset": 0, 00:12:51.189 "data_size": 65536 00:12:51.189 }, 00:12:51.189 { 00:12:51.189 "name": null, 00:12:51.189 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:51.189 "is_configured": false, 00:12:51.189 "data_offset": 0, 00:12:51.189 "data_size": 65536 00:12:51.189 }, 00:12:51.189 { 00:12:51.189 "name": "BaseBdev3", 00:12:51.189 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:51.189 "is_configured": true, 00:12:51.189 "data_offset": 0, 00:12:51.189 "data_size": 65536 00:12:51.189 } 00:12:51.189 ] 00:12:51.189 }' 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.189 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.759 18:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.759 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.759 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.759 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.759 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.760 [2024-12-15 18:43:51.967264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.760 18:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.760 18:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.760 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.760 "name": "Existed_Raid", 00:12:51.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.760 "strip_size_kb": 64, 00:12:51.760 "state": "configuring", 00:12:51.760 "raid_level": "raid5f", 00:12:51.760 "superblock": false, 00:12:51.760 "num_base_bdevs": 3, 00:12:51.760 "num_base_bdevs_discovered": 1, 00:12:51.760 "num_base_bdevs_operational": 3, 00:12:51.760 "base_bdevs_list": [ 00:12:51.760 { 00:12:51.760 "name": "BaseBdev1", 00:12:51.760 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:51.760 "is_configured": true, 00:12:51.760 "data_offset": 0, 00:12:51.760 "data_size": 65536 00:12:51.760 }, 00:12:51.760 { 00:12:51.760 "name": null, 00:12:51.760 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:51.760 "is_configured": false, 00:12:51.760 "data_offset": 0, 00:12:51.760 "data_size": 65536 00:12:51.760 }, 00:12:51.760 { 00:12:51.760 "name": null, 
00:12:51.760 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:51.760 "is_configured": false, 00:12:51.760 "data_offset": 0, 00:12:51.760 "data_size": 65536 00:12:51.760 } 00:12:51.760 ] 00:12:51.760 }' 00:12:51.760 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.760 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.020 [2024-12-15 18:43:52.454443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.020 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.280 18:43:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.280 "name": "Existed_Raid", 00:12:52.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.280 "strip_size_kb": 64, 00:12:52.280 "state": "configuring", 00:12:52.280 "raid_level": "raid5f", 00:12:52.280 "superblock": false, 00:12:52.280 "num_base_bdevs": 3, 00:12:52.280 "num_base_bdevs_discovered": 2, 00:12:52.280 "num_base_bdevs_operational": 3, 00:12:52.280 "base_bdevs_list": [ 00:12:52.280 { 
00:12:52.280 "name": "BaseBdev1", 00:12:52.280 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:52.280 "is_configured": true, 00:12:52.280 "data_offset": 0, 00:12:52.280 "data_size": 65536 00:12:52.280 }, 00:12:52.280 { 00:12:52.280 "name": null, 00:12:52.280 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:52.280 "is_configured": false, 00:12:52.280 "data_offset": 0, 00:12:52.280 "data_size": 65536 00:12:52.280 }, 00:12:52.280 { 00:12:52.280 "name": "BaseBdev3", 00:12:52.280 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:52.280 "is_configured": true, 00:12:52.280 "data_offset": 0, 00:12:52.280 "data_size": 65536 00:12:52.280 } 00:12:52.280 ] 00:12:52.280 }' 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.280 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 [2024-12-15 18:43:52.905743] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.540 18:43:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.540 "name": "Existed_Raid", 00:12:52.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.540 "strip_size_kb": 64, 00:12:52.540 "state": "configuring", 00:12:52.540 "raid_level": "raid5f", 00:12:52.540 "superblock": false, 00:12:52.540 "num_base_bdevs": 3, 00:12:52.540 "num_base_bdevs_discovered": 1, 00:12:52.540 "num_base_bdevs_operational": 3, 00:12:52.540 "base_bdevs_list": [ 00:12:52.540 { 00:12:52.540 "name": null, 00:12:52.540 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:52.540 "is_configured": false, 00:12:52.540 "data_offset": 0, 00:12:52.540 "data_size": 65536 00:12:52.540 }, 00:12:52.540 { 00:12:52.540 "name": null, 00:12:52.540 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:52.540 "is_configured": false, 00:12:52.540 "data_offset": 0, 00:12:52.540 "data_size": 65536 00:12:52.541 }, 00:12:52.541 { 00:12:52.541 "name": "BaseBdev3", 00:12:52.541 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:52.541 "is_configured": true, 00:12:52.541 "data_offset": 0, 00:12:52.541 "data_size": 65536 00:12:52.541 } 00:12:52.541 ] 00:12:52.541 }' 00:12:52.541 18:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.541 18:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.110 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.110 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.110 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.111 [2024-12-15 18:43:53.411464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.111 18:43:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.111 "name": "Existed_Raid", 00:12:53.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.111 "strip_size_kb": 64, 00:12:53.111 "state": "configuring", 00:12:53.111 "raid_level": "raid5f", 00:12:53.111 "superblock": false, 00:12:53.111 "num_base_bdevs": 3, 00:12:53.111 "num_base_bdevs_discovered": 2, 00:12:53.111 "num_base_bdevs_operational": 3, 00:12:53.111 "base_bdevs_list": [ 00:12:53.111 { 00:12:53.111 "name": null, 00:12:53.111 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:53.111 "is_configured": false, 00:12:53.111 "data_offset": 0, 00:12:53.111 "data_size": 65536 00:12:53.111 }, 00:12:53.111 { 00:12:53.111 "name": "BaseBdev2", 00:12:53.111 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:53.111 "is_configured": true, 00:12:53.111 "data_offset": 0, 00:12:53.111 "data_size": 65536 00:12:53.111 }, 00:12:53.111 { 00:12:53.111 "name": "BaseBdev3", 00:12:53.111 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:53.111 "is_configured": true, 00:12:53.111 "data_offset": 0, 00:12:53.111 "data_size": 65536 00:12:53.111 } 00:12:53.111 ] 00:12:53.111 }' 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.111 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.681 18:43:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59da53f7-ab10-40f4-b95c-46b33d4d50c6 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 [2024-12-15 18:43:53.929580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:53.681 [2024-12-15 18:43:53.929624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:12:53.681 [2024-12-15 18:43:53.929633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:53.681 [2024-12-15 18:43:53.929887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:12:53.681 [2024-12-15 18:43:53.930291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:12:53.681 [2024-12-15 18:43:53.930302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:12:53.681 [2024-12-15 18:43:53.930497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.681 NewBaseBdev 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.681 18:43:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.681 [ 00:12:53.681 { 00:12:53.681 "name": "NewBaseBdev", 00:12:53.681 "aliases": [ 00:12:53.681 "59da53f7-ab10-40f4-b95c-46b33d4d50c6" 00:12:53.681 ], 00:12:53.681 "product_name": "Malloc disk", 00:12:53.681 "block_size": 512, 00:12:53.681 "num_blocks": 65536, 00:12:53.681 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:53.681 "assigned_rate_limits": { 00:12:53.681 "rw_ios_per_sec": 0, 00:12:53.681 "rw_mbytes_per_sec": 0, 00:12:53.681 "r_mbytes_per_sec": 0, 00:12:53.681 "w_mbytes_per_sec": 0 00:12:53.681 }, 00:12:53.681 "claimed": true, 00:12:53.681 "claim_type": "exclusive_write", 00:12:53.681 "zoned": false, 00:12:53.681 "supported_io_types": { 00:12:53.681 "read": true, 00:12:53.681 "write": true, 00:12:53.681 "unmap": true, 00:12:53.681 "flush": true, 00:12:53.681 "reset": true, 00:12:53.681 "nvme_admin": false, 00:12:53.681 "nvme_io": false, 00:12:53.681 "nvme_io_md": false, 00:12:53.681 "write_zeroes": true, 00:12:53.681 "zcopy": true, 00:12:53.681 "get_zone_info": false, 00:12:53.681 "zone_management": false, 00:12:53.681 "zone_append": false, 00:12:53.681 "compare": false, 00:12:53.681 "compare_and_write": false, 00:12:53.681 "abort": true, 00:12:53.681 "seek_hole": false, 00:12:53.681 "seek_data": false, 00:12:53.681 "copy": true, 00:12:53.681 "nvme_iov_md": false 00:12:53.681 }, 00:12:53.681 "memory_domains": [ 00:12:53.681 { 00:12:53.681 "dma_device_id": "system", 00:12:53.681 "dma_device_type": 1 00:12:53.681 }, 00:12:53.681 { 00:12:53.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.681 "dma_device_type": 2 00:12:53.681 } 00:12:53.681 ], 00:12:53.681 "driver_specific": {} 00:12:53.681 } 00:12:53.681 ] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:53.681 18:43:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:53.681 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.682 18:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.682 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.682 "name": "Existed_Raid", 00:12:53.682 "uuid": "d13d109e-7c3a-4c35-ab6f-f15c527f5b03", 00:12:53.682 "strip_size_kb": 64, 00:12:53.682 "state": "online", 
00:12:53.682 "raid_level": "raid5f", 00:12:53.682 "superblock": false, 00:12:53.682 "num_base_bdevs": 3, 00:12:53.682 "num_base_bdevs_discovered": 3, 00:12:53.682 "num_base_bdevs_operational": 3, 00:12:53.682 "base_bdevs_list": [ 00:12:53.682 { 00:12:53.682 "name": "NewBaseBdev", 00:12:53.682 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:53.682 "is_configured": true, 00:12:53.682 "data_offset": 0, 00:12:53.682 "data_size": 65536 00:12:53.682 }, 00:12:53.682 { 00:12:53.682 "name": "BaseBdev2", 00:12:53.682 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:53.682 "is_configured": true, 00:12:53.682 "data_offset": 0, 00:12:53.682 "data_size": 65536 00:12:53.682 }, 00:12:53.682 { 00:12:53.682 "name": "BaseBdev3", 00:12:53.682 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:53.682 "is_configured": true, 00:12:53.682 "data_offset": 0, 00:12:53.682 "data_size": 65536 00:12:53.682 } 00:12:53.682 ] 00:12:53.682 }' 00:12:53.682 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.682 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.253 18:43:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 [2024-12-15 18:43:54.397028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.253 "name": "Existed_Raid", 00:12:54.253 "aliases": [ 00:12:54.253 "d13d109e-7c3a-4c35-ab6f-f15c527f5b03" 00:12:54.253 ], 00:12:54.253 "product_name": "Raid Volume", 00:12:54.253 "block_size": 512, 00:12:54.253 "num_blocks": 131072, 00:12:54.253 "uuid": "d13d109e-7c3a-4c35-ab6f-f15c527f5b03", 00:12:54.253 "assigned_rate_limits": { 00:12:54.253 "rw_ios_per_sec": 0, 00:12:54.253 "rw_mbytes_per_sec": 0, 00:12:54.253 "r_mbytes_per_sec": 0, 00:12:54.253 "w_mbytes_per_sec": 0 00:12:54.253 }, 00:12:54.253 "claimed": false, 00:12:54.253 "zoned": false, 00:12:54.253 "supported_io_types": { 00:12:54.253 "read": true, 00:12:54.253 "write": true, 00:12:54.253 "unmap": false, 00:12:54.253 "flush": false, 00:12:54.253 "reset": true, 00:12:54.253 "nvme_admin": false, 00:12:54.253 "nvme_io": false, 00:12:54.253 "nvme_io_md": false, 00:12:54.253 "write_zeroes": true, 00:12:54.253 "zcopy": false, 00:12:54.253 "get_zone_info": false, 00:12:54.253 "zone_management": false, 00:12:54.253 "zone_append": false, 00:12:54.253 "compare": false, 00:12:54.253 "compare_and_write": false, 00:12:54.253 "abort": false, 00:12:54.253 "seek_hole": false, 00:12:54.253 "seek_data": false, 00:12:54.253 "copy": false, 00:12:54.253 "nvme_iov_md": false 00:12:54.253 }, 00:12:54.253 "driver_specific": { 00:12:54.253 "raid": { 00:12:54.253 "uuid": 
"d13d109e-7c3a-4c35-ab6f-f15c527f5b03", 00:12:54.253 "strip_size_kb": 64, 00:12:54.253 "state": "online", 00:12:54.253 "raid_level": "raid5f", 00:12:54.253 "superblock": false, 00:12:54.253 "num_base_bdevs": 3, 00:12:54.253 "num_base_bdevs_discovered": 3, 00:12:54.253 "num_base_bdevs_operational": 3, 00:12:54.253 "base_bdevs_list": [ 00:12:54.253 { 00:12:54.253 "name": "NewBaseBdev", 00:12:54.253 "uuid": "59da53f7-ab10-40f4-b95c-46b33d4d50c6", 00:12:54.253 "is_configured": true, 00:12:54.253 "data_offset": 0, 00:12:54.253 "data_size": 65536 00:12:54.253 }, 00:12:54.253 { 00:12:54.253 "name": "BaseBdev2", 00:12:54.253 "uuid": "cc51ff70-eda9-46e0-9c7f-3bcee85b2495", 00:12:54.253 "is_configured": true, 00:12:54.253 "data_offset": 0, 00:12:54.253 "data_size": 65536 00:12:54.253 }, 00:12:54.253 { 00:12:54.253 "name": "BaseBdev3", 00:12:54.253 "uuid": "048c53c1-d737-4809-bd40-970e54d79352", 00:12:54.253 "is_configured": true, 00:12:54.253 "data_offset": 0, 00:12:54.253 "data_size": 65536 00:12:54.253 } 00:12:54.253 ] 00:12:54.253 } 00:12:54.253 } 00:12:54.253 }' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:54.253 BaseBdev2 00:12:54.253 BaseBdev3' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.253 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.254 [2024-12-15 18:43:54.648605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.254 [2024-12-15 18:43:54.648674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.254 [2024-12-15 18:43:54.648761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.254 [2024-12-15 18:43:54.649023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.254 [2024-12-15 18:43:54.649078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92388 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 92388 ']' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 92388 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.254 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92388 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92388' 00:12:54.514 killing process with pid 92388 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 92388 00:12:54.514 [2024-12-15 18:43:54.700174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 92388 00:12:54.514 [2024-12-15 18:43:54.731964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:54.514 00:12:54.514 real 0m8.667s 00:12:54.514 user 0m14.721s 00:12:54.514 sys 0m1.929s 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.514 ************************************ 00:12:54.514 END TEST raid5f_state_function_test 00:12:54.514 ************************************ 00:12:54.514 18:43:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.774 18:43:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:12:54.774 18:43:55 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:54.774 18:43:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.774 18:43:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.774 ************************************ 00:12:54.774 START TEST raid5f_state_function_test_sb 00:12:54.774 ************************************ 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:54.774 18:43:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92987 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92987' 00:12:54.774 Process raid pid: 92987 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 92987 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92987 ']' 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.774 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.774 [2024-12-15 18:43:55.124408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:12:54.774 [2024-12-15 18:43:55.124674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.034 [2024-12-15 18:43:55.297204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.034 [2024-12-15 18:43:55.322141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.034 [2024-12-15 18:43:55.364238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.034 [2024-12-15 18:43:55.364349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.602 [2024-12-15 18:43:55.959294] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.602 [2024-12-15 18:43:55.959410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.602 [2024-12-15 18:43:55.959424] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.602 [2024-12-15 18:43:55.959435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.602 [2024-12-15 18:43:55.959441] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:55.602 [2024-12-15 18:43:55.959451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.602 18:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.603 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.603 18:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.603 18:43:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.603 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.603 "name": "Existed_Raid", 00:12:55.603 "uuid": "c5d0d702-e5ab-4f77-b215-7145c976d8cb", 00:12:55.603 "strip_size_kb": 64, 00:12:55.603 "state": "configuring", 00:12:55.603 "raid_level": "raid5f", 00:12:55.603 "superblock": true, 00:12:55.603 "num_base_bdevs": 3, 00:12:55.603 "num_base_bdevs_discovered": 0, 00:12:55.603 "num_base_bdevs_operational": 3, 00:12:55.603 "base_bdevs_list": [ 00:12:55.603 { 00:12:55.603 "name": "BaseBdev1", 00:12:55.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.603 "is_configured": false, 00:12:55.603 "data_offset": 0, 00:12:55.603 "data_size": 0 00:12:55.603 }, 00:12:55.603 { 00:12:55.603 "name": "BaseBdev2", 00:12:55.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.603 "is_configured": false, 00:12:55.603 "data_offset": 0, 00:12:55.603 "data_size": 0 00:12:55.603 }, 00:12:55.603 { 00:12:55.603 "name": "BaseBdev3", 00:12:55.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.603 "is_configured": false, 00:12:55.603 "data_offset": 0, 00:12:55.603 "data_size": 0 00:12:55.603 } 00:12:55.603 ] 00:12:55.603 }' 00:12:55.603 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.603 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 [2024-12-15 18:43:56.390437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.172 
[2024-12-15 18:43:56.390515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 [2024-12-15 18:43:56.398449] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.172 [2024-12-15 18:43:56.398522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.172 [2024-12-15 18:43:56.398549] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.172 [2024-12-15 18:43:56.398572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.172 [2024-12-15 18:43:56.398590] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.172 [2024-12-15 18:43:56.398610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 [2024-12-15 18:43:56.415254] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.172 BaseBdev1 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 [ 00:12:56.172 { 00:12:56.172 "name": "BaseBdev1", 00:12:56.172 "aliases": [ 00:12:56.172 "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc" 00:12:56.172 ], 00:12:56.172 "product_name": "Malloc disk", 00:12:56.172 "block_size": 512, 00:12:56.172 
"num_blocks": 65536, 00:12:56.172 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:56.172 "assigned_rate_limits": { 00:12:56.172 "rw_ios_per_sec": 0, 00:12:56.172 "rw_mbytes_per_sec": 0, 00:12:56.172 "r_mbytes_per_sec": 0, 00:12:56.172 "w_mbytes_per_sec": 0 00:12:56.172 }, 00:12:56.172 "claimed": true, 00:12:56.172 "claim_type": "exclusive_write", 00:12:56.172 "zoned": false, 00:12:56.172 "supported_io_types": { 00:12:56.172 "read": true, 00:12:56.172 "write": true, 00:12:56.172 "unmap": true, 00:12:56.172 "flush": true, 00:12:56.172 "reset": true, 00:12:56.172 "nvme_admin": false, 00:12:56.172 "nvme_io": false, 00:12:56.172 "nvme_io_md": false, 00:12:56.172 "write_zeroes": true, 00:12:56.172 "zcopy": true, 00:12:56.172 "get_zone_info": false, 00:12:56.172 "zone_management": false, 00:12:56.172 "zone_append": false, 00:12:56.172 "compare": false, 00:12:56.172 "compare_and_write": false, 00:12:56.172 "abort": true, 00:12:56.172 "seek_hole": false, 00:12:56.172 "seek_data": false, 00:12:56.172 "copy": true, 00:12:56.172 "nvme_iov_md": false 00:12:56.172 }, 00:12:56.172 "memory_domains": [ 00:12:56.172 { 00:12:56.172 "dma_device_id": "system", 00:12:56.172 "dma_device_type": 1 00:12:56.172 }, 00:12:56.172 { 00:12:56.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.172 "dma_device_type": 2 00:12:56.172 } 00:12:56.172 ], 00:12:56.172 "driver_specific": {} 00:12:56.172 } 00:12:56.172 ] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.172 "name": "Existed_Raid", 00:12:56.172 "uuid": "be2cb530-b6fd-439e-9c86-36f430154f6f", 00:12:56.172 "strip_size_kb": 64, 00:12:56.172 "state": "configuring", 00:12:56.172 "raid_level": "raid5f", 00:12:56.172 "superblock": true, 00:12:56.172 "num_base_bdevs": 3, 00:12:56.172 "num_base_bdevs_discovered": 1, 00:12:56.172 "num_base_bdevs_operational": 3, 00:12:56.172 "base_bdevs_list": [ 00:12:56.172 { 00:12:56.172 
"name": "BaseBdev1", 00:12:56.172 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:56.172 "is_configured": true, 00:12:56.172 "data_offset": 2048, 00:12:56.172 "data_size": 63488 00:12:56.172 }, 00:12:56.172 { 00:12:56.172 "name": "BaseBdev2", 00:12:56.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.172 "is_configured": false, 00:12:56.172 "data_offset": 0, 00:12:56.172 "data_size": 0 00:12:56.172 }, 00:12:56.172 { 00:12:56.172 "name": "BaseBdev3", 00:12:56.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.172 "is_configured": false, 00:12:56.172 "data_offset": 0, 00:12:56.172 "data_size": 0 00:12:56.172 } 00:12:56.172 ] 00:12:56.172 }' 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.172 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.776 [2024-12-15 18:43:56.886456] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.776 [2024-12-15 18:43:56.886543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:56.776 [2024-12-15 18:43:56.894495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.776 [2024-12-15 18:43:56.896356] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.776 [2024-12-15 18:43:56.896429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.776 [2024-12-15 18:43:56.896456] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.776 [2024-12-15 18:43:56.896478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.776 "name": "Existed_Raid", 00:12:56.776 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:56.776 "strip_size_kb": 64, 00:12:56.776 "state": "configuring", 00:12:56.776 "raid_level": "raid5f", 00:12:56.776 "superblock": true, 00:12:56.776 "num_base_bdevs": 3, 00:12:56.776 "num_base_bdevs_discovered": 1, 00:12:56.776 "num_base_bdevs_operational": 3, 00:12:56.776 "base_bdevs_list": [ 00:12:56.776 { 00:12:56.776 "name": "BaseBdev1", 00:12:56.776 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:56.776 "is_configured": true, 00:12:56.776 "data_offset": 2048, 00:12:56.776 "data_size": 63488 00:12:56.776 }, 00:12:56.776 { 00:12:56.776 "name": "BaseBdev2", 00:12:56.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.776 "is_configured": false, 00:12:56.776 "data_offset": 0, 00:12:56.776 "data_size": 0 00:12:56.776 }, 00:12:56.776 { 00:12:56.776 "name": "BaseBdev3", 00:12:56.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.776 "is_configured": false, 00:12:56.776 "data_offset": 0, 00:12:56.776 "data_size": 
0 00:12:56.776 } 00:12:56.776 ] 00:12:56.776 }' 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.776 18:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 [2024-12-15 18:43:57.332740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.036 BaseBdev2 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 [ 00:12:57.036 { 00:12:57.036 "name": "BaseBdev2", 00:12:57.036 "aliases": [ 00:12:57.036 "23ed3582-4d90-4ccb-b8fc-7856daca64d9" 00:12:57.036 ], 00:12:57.036 "product_name": "Malloc disk", 00:12:57.036 "block_size": 512, 00:12:57.036 "num_blocks": 65536, 00:12:57.036 "uuid": "23ed3582-4d90-4ccb-b8fc-7856daca64d9", 00:12:57.036 "assigned_rate_limits": { 00:12:57.036 "rw_ios_per_sec": 0, 00:12:57.036 "rw_mbytes_per_sec": 0, 00:12:57.036 "r_mbytes_per_sec": 0, 00:12:57.036 "w_mbytes_per_sec": 0 00:12:57.036 }, 00:12:57.036 "claimed": true, 00:12:57.036 "claim_type": "exclusive_write", 00:12:57.036 "zoned": false, 00:12:57.036 "supported_io_types": { 00:12:57.036 "read": true, 00:12:57.036 "write": true, 00:12:57.036 "unmap": true, 00:12:57.036 "flush": true, 00:12:57.036 "reset": true, 00:12:57.036 "nvme_admin": false, 00:12:57.036 "nvme_io": false, 00:12:57.036 "nvme_io_md": false, 00:12:57.036 "write_zeroes": true, 00:12:57.036 "zcopy": true, 00:12:57.036 "get_zone_info": false, 00:12:57.036 "zone_management": false, 00:12:57.036 "zone_append": false, 00:12:57.036 "compare": false, 00:12:57.036 "compare_and_write": false, 00:12:57.036 "abort": true, 00:12:57.036 "seek_hole": false, 00:12:57.036 "seek_data": false, 00:12:57.036 "copy": true, 00:12:57.036 "nvme_iov_md": false 00:12:57.036 }, 00:12:57.036 "memory_domains": [ 00:12:57.036 { 00:12:57.036 "dma_device_id": "system", 00:12:57.036 "dma_device_type": 1 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.036 "dma_device_type": 2 00:12:57.036 } 
00:12:57.036 ], 00:12:57.036 "driver_specific": {} 00:12:57.036 } 00:12:57.036 ] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.036 "name": "Existed_Raid", 00:12:57.036 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:57.036 "strip_size_kb": 64, 00:12:57.036 "state": "configuring", 00:12:57.036 "raid_level": "raid5f", 00:12:57.036 "superblock": true, 00:12:57.036 "num_base_bdevs": 3, 00:12:57.036 "num_base_bdevs_discovered": 2, 00:12:57.036 "num_base_bdevs_operational": 3, 00:12:57.036 "base_bdevs_list": [ 00:12:57.036 { 00:12:57.036 "name": "BaseBdev1", 00:12:57.036 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "name": "BaseBdev2", 00:12:57.036 "uuid": "23ed3582-4d90-4ccb-b8fc-7856daca64d9", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "name": "BaseBdev3", 00:12:57.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.036 "is_configured": false, 00:12:57.036 "data_offset": 0, 00:12:57.036 "data_size": 0 00:12:57.036 } 00:12:57.036 ] 00:12:57.036 }' 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.036 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.606 [2024-12-15 18:43:57.875320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.606 [2024-12-15 18:43:57.876143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:12:57.606 BaseBdev3 00:12:57.606 [2024-12-15 18:43:57.876332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.606 [2024-12-15 18:43:57.877396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.606 [2024-12-15 18:43:57.878944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:12:57.606 [2024-12-15 18:43:57.878983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.606 [2024-12-15 18:43:57.879331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.606 [ 00:12:57.606 { 00:12:57.606 "name": "BaseBdev3", 00:12:57.606 "aliases": [ 00:12:57.606 "1b98b07d-fd04-48c4-ac54-b980851a7123" 00:12:57.606 ], 00:12:57.606 "product_name": "Malloc disk", 00:12:57.606 "block_size": 512, 00:12:57.606 "num_blocks": 65536, 00:12:57.606 "uuid": "1b98b07d-fd04-48c4-ac54-b980851a7123", 00:12:57.606 "assigned_rate_limits": { 00:12:57.606 "rw_ios_per_sec": 0, 00:12:57.606 "rw_mbytes_per_sec": 0, 00:12:57.606 "r_mbytes_per_sec": 0, 00:12:57.606 "w_mbytes_per_sec": 0 00:12:57.606 }, 00:12:57.606 "claimed": true, 00:12:57.606 "claim_type": "exclusive_write", 00:12:57.606 "zoned": false, 00:12:57.606 "supported_io_types": { 00:12:57.606 "read": true, 00:12:57.606 "write": true, 00:12:57.606 "unmap": true, 00:12:57.606 "flush": true, 00:12:57.606 "reset": true, 00:12:57.606 "nvme_admin": false, 00:12:57.606 "nvme_io": false, 00:12:57.606 "nvme_io_md": false, 00:12:57.606 "write_zeroes": true, 00:12:57.606 "zcopy": true, 00:12:57.606 "get_zone_info": false, 00:12:57.606 "zone_management": false, 00:12:57.606 "zone_append": false, 00:12:57.606 "compare": false, 00:12:57.606 "compare_and_write": false, 00:12:57.606 "abort": true, 00:12:57.606 "seek_hole": false, 00:12:57.606 "seek_data": false, 00:12:57.606 "copy": true, 00:12:57.606 "nvme_iov_md": 
false 00:12:57.606 }, 00:12:57.606 "memory_domains": [ 00:12:57.606 { 00:12:57.606 "dma_device_id": "system", 00:12:57.606 "dma_device_type": 1 00:12:57.606 }, 00:12:57.606 { 00:12:57.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.606 "dma_device_type": 2 00:12:57.606 } 00:12:57.606 ], 00:12:57.606 "driver_specific": {} 00:12:57.606 } 00:12:57.606 ] 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.606 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.607 "name": "Existed_Raid", 00:12:57.607 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:57.607 "strip_size_kb": 64, 00:12:57.607 "state": "online", 00:12:57.607 "raid_level": "raid5f", 00:12:57.607 "superblock": true, 00:12:57.607 "num_base_bdevs": 3, 00:12:57.607 "num_base_bdevs_discovered": 3, 00:12:57.607 "num_base_bdevs_operational": 3, 00:12:57.607 "base_bdevs_list": [ 00:12:57.607 { 00:12:57.607 "name": "BaseBdev1", 00:12:57.607 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:57.607 "is_configured": true, 00:12:57.607 "data_offset": 2048, 00:12:57.607 "data_size": 63488 00:12:57.607 }, 00:12:57.607 { 00:12:57.607 "name": "BaseBdev2", 00:12:57.607 "uuid": "23ed3582-4d90-4ccb-b8fc-7856daca64d9", 00:12:57.607 "is_configured": true, 00:12:57.607 "data_offset": 2048, 00:12:57.607 "data_size": 63488 00:12:57.607 }, 00:12:57.607 { 00:12:57.607 "name": "BaseBdev3", 00:12:57.607 "uuid": "1b98b07d-fd04-48c4-ac54-b980851a7123", 00:12:57.607 "is_configured": true, 00:12:57.607 "data_offset": 2048, 00:12:57.607 "data_size": 63488 00:12:57.607 } 00:12:57.607 ] 00:12:57.607 }' 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.607 18:43:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.181 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:58.181 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:58.181 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 [2024-12-15 18:43:58.362677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.182 "name": "Existed_Raid", 00:12:58.182 "aliases": [ 00:12:58.182 "ae652c77-f53b-44d6-8aa9-63d80c26b293" 00:12:58.182 ], 00:12:58.182 "product_name": "Raid Volume", 00:12:58.182 "block_size": 512, 00:12:58.182 "num_blocks": 126976, 00:12:58.182 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:58.182 "assigned_rate_limits": { 00:12:58.182 "rw_ios_per_sec": 0, 00:12:58.182 "rw_mbytes_per_sec": 0, 00:12:58.182 "r_mbytes_per_sec": 
0, 00:12:58.182 "w_mbytes_per_sec": 0 00:12:58.182 }, 00:12:58.182 "claimed": false, 00:12:58.182 "zoned": false, 00:12:58.182 "supported_io_types": { 00:12:58.182 "read": true, 00:12:58.182 "write": true, 00:12:58.182 "unmap": false, 00:12:58.182 "flush": false, 00:12:58.182 "reset": true, 00:12:58.182 "nvme_admin": false, 00:12:58.182 "nvme_io": false, 00:12:58.182 "nvme_io_md": false, 00:12:58.182 "write_zeroes": true, 00:12:58.182 "zcopy": false, 00:12:58.182 "get_zone_info": false, 00:12:58.182 "zone_management": false, 00:12:58.182 "zone_append": false, 00:12:58.182 "compare": false, 00:12:58.182 "compare_and_write": false, 00:12:58.182 "abort": false, 00:12:58.182 "seek_hole": false, 00:12:58.182 "seek_data": false, 00:12:58.182 "copy": false, 00:12:58.182 "nvme_iov_md": false 00:12:58.182 }, 00:12:58.182 "driver_specific": { 00:12:58.182 "raid": { 00:12:58.182 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:58.182 "strip_size_kb": 64, 00:12:58.182 "state": "online", 00:12:58.182 "raid_level": "raid5f", 00:12:58.182 "superblock": true, 00:12:58.182 "num_base_bdevs": 3, 00:12:58.182 "num_base_bdevs_discovered": 3, 00:12:58.182 "num_base_bdevs_operational": 3, 00:12:58.182 "base_bdevs_list": [ 00:12:58.182 { 00:12:58.182 "name": "BaseBdev1", 00:12:58.182 "uuid": "c42fd7c0-eee8-4efd-aa7e-030ce86c18dc", 00:12:58.182 "is_configured": true, 00:12:58.182 "data_offset": 2048, 00:12:58.182 "data_size": 63488 00:12:58.182 }, 00:12:58.182 { 00:12:58.182 "name": "BaseBdev2", 00:12:58.182 "uuid": "23ed3582-4d90-4ccb-b8fc-7856daca64d9", 00:12:58.182 "is_configured": true, 00:12:58.182 "data_offset": 2048, 00:12:58.182 "data_size": 63488 00:12:58.182 }, 00:12:58.182 { 00:12:58.182 "name": "BaseBdev3", 00:12:58.182 "uuid": "1b98b07d-fd04-48c4-ac54-b980851a7123", 00:12:58.182 "is_configured": true, 00:12:58.182 "data_offset": 2048, 00:12:58.182 "data_size": 63488 00:12:58.182 } 00:12:58.182 ] 00:12:58.182 } 00:12:58.182 } 00:12:58.182 }' 00:12:58.182 18:43:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:58.182 BaseBdev2 00:12:58.182 BaseBdev3' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.182 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.442 [2024-12-15 18:43:58.658003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.442 "name": "Existed_Raid", 00:12:58.442 "uuid": "ae652c77-f53b-44d6-8aa9-63d80c26b293", 00:12:58.442 "strip_size_kb": 64, 00:12:58.442 "state": "online", 00:12:58.442 "raid_level": "raid5f", 00:12:58.442 "superblock": true, 00:12:58.442 "num_base_bdevs": 3, 00:12:58.442 "num_base_bdevs_discovered": 2, 00:12:58.442 "num_base_bdevs_operational": 2, 00:12:58.442 "base_bdevs_list": [ 00:12:58.442 { 00:12:58.442 "name": null, 00:12:58.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.442 "is_configured": false, 00:12:58.442 "data_offset": 0, 00:12:58.442 "data_size": 63488 00:12:58.442 }, 00:12:58.442 { 00:12:58.442 "name": "BaseBdev2", 00:12:58.442 "uuid": "23ed3582-4d90-4ccb-b8fc-7856daca64d9", 00:12:58.442 "is_configured": true, 00:12:58.442 "data_offset": 2048, 00:12:58.442 "data_size": 63488 00:12:58.442 }, 00:12:58.442 { 00:12:58.442 "name": "BaseBdev3", 00:12:58.442 "uuid": "1b98b07d-fd04-48c4-ac54-b980851a7123", 00:12:58.442 "is_configured": true, 00:12:58.442 "data_offset": 2048, 00:12:58.442 "data_size": 63488 00:12:58.442 } 00:12:58.442 ] 00:12:58.442 }' 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.442 18:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.702 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:58.703 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.703 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.703 [2024-12-15 18:43:59.132735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.703 [2024-12-15 18:43:59.132925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.963 [2024-12-15 18:43:59.144081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.963 18:43:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 [2024-12-15 18:43:59.203999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.963 [2024-12-15 18:43:59.204084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.963 
18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:58.963 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 BaseBdev2 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 [ 00:12:58.964 { 00:12:58.964 "name": "BaseBdev2", 00:12:58.964 "aliases": [ 00:12:58.964 "cc1f782b-664c-4143-9297-38388b5a6c85" 00:12:58.964 ], 00:12:58.964 "product_name": "Malloc disk", 00:12:58.964 "block_size": 512, 00:12:58.964 "num_blocks": 65536, 00:12:58.964 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:12:58.964 "assigned_rate_limits": { 00:12:58.964 "rw_ios_per_sec": 0, 00:12:58.964 "rw_mbytes_per_sec": 0, 00:12:58.964 "r_mbytes_per_sec": 0, 00:12:58.964 "w_mbytes_per_sec": 0 00:12:58.964 }, 00:12:58.964 "claimed": false, 00:12:58.964 "zoned": false, 00:12:58.964 "supported_io_types": { 00:12:58.964 "read": true, 00:12:58.964 "write": true, 00:12:58.964 "unmap": true, 00:12:58.964 "flush": true, 00:12:58.964 "reset": true, 00:12:58.964 "nvme_admin": false, 00:12:58.964 "nvme_io": false, 00:12:58.964 "nvme_io_md": false, 00:12:58.964 "write_zeroes": true, 00:12:58.964 "zcopy": true, 00:12:58.964 "get_zone_info": false, 00:12:58.964 "zone_management": false, 00:12:58.964 "zone_append": false, 00:12:58.964 "compare": false, 00:12:58.964 "compare_and_write": false, 
00:12:58.964 "abort": true, 00:12:58.964 "seek_hole": false, 00:12:58.964 "seek_data": false, 00:12:58.964 "copy": true, 00:12:58.964 "nvme_iov_md": false 00:12:58.964 }, 00:12:58.964 "memory_domains": [ 00:12:58.964 { 00:12:58.964 "dma_device_id": "system", 00:12:58.964 "dma_device_type": 1 00:12:58.964 }, 00:12:58.964 { 00:12:58.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.964 "dma_device_type": 2 00:12:58.964 } 00:12:58.964 ], 00:12:58.964 "driver_specific": {} 00:12:58.964 } 00:12:58.964 ] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 BaseBdev3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 [ 00:12:58.964 { 00:12:58.964 "name": "BaseBdev3", 00:12:58.964 "aliases": [ 00:12:58.964 "670425a8-2f40-456c-a78c-c8a57f3cb2fb" 00:12:58.964 ], 00:12:58.964 "product_name": "Malloc disk", 00:12:58.964 "block_size": 512, 00:12:58.964 "num_blocks": 65536, 00:12:58.964 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:12:58.964 "assigned_rate_limits": { 00:12:58.964 "rw_ios_per_sec": 0, 00:12:58.964 "rw_mbytes_per_sec": 0, 00:12:58.964 "r_mbytes_per_sec": 0, 00:12:58.964 "w_mbytes_per_sec": 0 00:12:58.964 }, 00:12:58.964 "claimed": false, 00:12:58.964 "zoned": false, 00:12:58.964 "supported_io_types": { 00:12:58.964 "read": true, 00:12:58.964 "write": true, 00:12:58.964 "unmap": true, 00:12:58.964 "flush": true, 00:12:58.964 "reset": true, 00:12:58.964 "nvme_admin": false, 00:12:58.964 "nvme_io": false, 00:12:58.964 "nvme_io_md": false, 00:12:58.964 "write_zeroes": true, 00:12:58.964 "zcopy": true, 00:12:58.964 "get_zone_info": false, 00:12:58.964 "zone_management": false, 
00:12:58.964 "zone_append": false, 00:12:58.964 "compare": false, 00:12:58.964 "compare_and_write": false, 00:12:58.964 "abort": true, 00:12:58.964 "seek_hole": false, 00:12:58.964 "seek_data": false, 00:12:58.964 "copy": true, 00:12:58.964 "nvme_iov_md": false 00:12:58.964 }, 00:12:58.964 "memory_domains": [ 00:12:58.964 { 00:12:58.964 "dma_device_id": "system", 00:12:58.964 "dma_device_type": 1 00:12:58.964 }, 00:12:58.964 { 00:12:58.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.964 "dma_device_type": 2 00:12:58.964 } 00:12:58.964 ], 00:12:58.964 "driver_specific": {} 00:12:58.964 } 00:12:58.964 ] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 [2024-12-15 18:43:59.367443] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.964 [2024-12-15 18:43:59.367531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.964 [2024-12-15 18:43:59.367573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.964 [2024-12-15 18:43:59.369349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.964 
18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.964 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.224 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:59.224 "name": "Existed_Raid", 00:12:59.224 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:12:59.224 "strip_size_kb": 64, 00:12:59.224 "state": "configuring", 00:12:59.224 "raid_level": "raid5f", 00:12:59.224 "superblock": true, 00:12:59.224 "num_base_bdevs": 3, 00:12:59.224 "num_base_bdevs_discovered": 2, 00:12:59.224 "num_base_bdevs_operational": 3, 00:12:59.224 "base_bdevs_list": [ 00:12:59.224 { 00:12:59.224 "name": "BaseBdev1", 00:12:59.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.224 "is_configured": false, 00:12:59.224 "data_offset": 0, 00:12:59.224 "data_size": 0 00:12:59.224 }, 00:12:59.224 { 00:12:59.224 "name": "BaseBdev2", 00:12:59.224 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:12:59.224 "is_configured": true, 00:12:59.224 "data_offset": 2048, 00:12:59.224 "data_size": 63488 00:12:59.224 }, 00:12:59.224 { 00:12:59.224 "name": "BaseBdev3", 00:12:59.224 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:12:59.224 "is_configured": true, 00:12:59.224 "data_offset": 2048, 00:12:59.224 "data_size": 63488 00:12:59.224 } 00:12:59.224 ] 00:12:59.224 }' 00:12:59.224 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.224 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.484 [2024-12-15 18:43:59.878581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.484 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.743 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.743 "name": "Existed_Raid", 00:12:59.743 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:12:59.743 "strip_size_kb": 64, 00:12:59.743 
"state": "configuring", 00:12:59.743 "raid_level": "raid5f", 00:12:59.743 "superblock": true, 00:12:59.743 "num_base_bdevs": 3, 00:12:59.743 "num_base_bdevs_discovered": 1, 00:12:59.743 "num_base_bdevs_operational": 3, 00:12:59.743 "base_bdevs_list": [ 00:12:59.743 { 00:12:59.743 "name": "BaseBdev1", 00:12:59.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.743 "is_configured": false, 00:12:59.743 "data_offset": 0, 00:12:59.743 "data_size": 0 00:12:59.743 }, 00:12:59.743 { 00:12:59.743 "name": null, 00:12:59.743 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:12:59.743 "is_configured": false, 00:12:59.743 "data_offset": 0, 00:12:59.743 "data_size": 63488 00:12:59.743 }, 00:12:59.743 { 00:12:59.743 "name": "BaseBdev3", 00:12:59.743 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:12:59.743 "is_configured": true, 00:12:59.743 "data_offset": 2048, 00:12:59.743 "data_size": 63488 00:12:59.743 } 00:12:59.743 ] 00:12:59.743 }' 00:12:59.743 18:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.743 18:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.004 [2024-12-15 18:44:00.396613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.004 BaseBdev1 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.004 [ 00:13:00.004 { 00:13:00.004 "name": "BaseBdev1", 00:13:00.004 "aliases": [ 00:13:00.004 "8c870437-be03-4e96-97f1-2d97842cf0f1" 00:13:00.004 ], 00:13:00.004 "product_name": "Malloc disk", 00:13:00.004 "block_size": 512, 00:13:00.004 "num_blocks": 65536, 00:13:00.004 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:00.004 "assigned_rate_limits": { 00:13:00.004 "rw_ios_per_sec": 0, 00:13:00.004 "rw_mbytes_per_sec": 0, 00:13:00.004 "r_mbytes_per_sec": 0, 00:13:00.004 "w_mbytes_per_sec": 0 00:13:00.004 }, 00:13:00.004 "claimed": true, 00:13:00.004 "claim_type": "exclusive_write", 00:13:00.004 "zoned": false, 00:13:00.004 "supported_io_types": { 00:13:00.004 "read": true, 00:13:00.004 "write": true, 00:13:00.004 "unmap": true, 00:13:00.004 "flush": true, 00:13:00.004 "reset": true, 00:13:00.004 "nvme_admin": false, 00:13:00.004 "nvme_io": false, 00:13:00.004 "nvme_io_md": false, 00:13:00.004 "write_zeroes": true, 00:13:00.004 "zcopy": true, 00:13:00.004 "get_zone_info": false, 00:13:00.004 "zone_management": false, 00:13:00.004 "zone_append": false, 00:13:00.004 "compare": false, 00:13:00.004 "compare_and_write": false, 00:13:00.004 "abort": true, 00:13:00.004 "seek_hole": false, 00:13:00.004 "seek_data": false, 00:13:00.004 "copy": true, 00:13:00.004 "nvme_iov_md": false 00:13:00.004 }, 00:13:00.004 "memory_domains": [ 00:13:00.004 { 00:13:00.004 "dma_device_id": "system", 00:13:00.004 "dma_device_type": 1 00:13:00.004 }, 00:13:00.004 { 00:13:00.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.004 "dma_device_type": 2 00:13:00.004 } 00:13:00.004 ], 00:13:00.004 "driver_specific": {} 00:13:00.004 } 00:13:00.004 ] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.004 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.263 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.263 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.263 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.264 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.264 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.264 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.264 "name": "Existed_Raid", 00:13:00.264 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:00.264 "strip_size_kb": 64, 00:13:00.264 
"state": "configuring", 00:13:00.264 "raid_level": "raid5f", 00:13:00.264 "superblock": true, 00:13:00.264 "num_base_bdevs": 3, 00:13:00.264 "num_base_bdevs_discovered": 2, 00:13:00.264 "num_base_bdevs_operational": 3, 00:13:00.264 "base_bdevs_list": [ 00:13:00.264 { 00:13:00.264 "name": "BaseBdev1", 00:13:00.264 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:00.264 "is_configured": true, 00:13:00.264 "data_offset": 2048, 00:13:00.264 "data_size": 63488 00:13:00.264 }, 00:13:00.264 { 00:13:00.264 "name": null, 00:13:00.264 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:00.264 "is_configured": false, 00:13:00.264 "data_offset": 0, 00:13:00.264 "data_size": 63488 00:13:00.264 }, 00:13:00.264 { 00:13:00.264 "name": "BaseBdev3", 00:13:00.264 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:00.264 "is_configured": true, 00:13:00.264 "data_offset": 2048, 00:13:00.264 "data_size": 63488 00:13:00.264 } 00:13:00.264 ] 00:13:00.264 }' 00:13:00.264 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.264 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.523 [2024-12-15 18:44:00.907773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.523 18:44:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.523 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.782 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.782 "name": "Existed_Raid", 00:13:00.782 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:00.782 "strip_size_kb": 64, 00:13:00.782 "state": "configuring", 00:13:00.782 "raid_level": "raid5f", 00:13:00.782 "superblock": true, 00:13:00.782 "num_base_bdevs": 3, 00:13:00.782 "num_base_bdevs_discovered": 1, 00:13:00.782 "num_base_bdevs_operational": 3, 00:13:00.782 "base_bdevs_list": [ 00:13:00.782 { 00:13:00.782 "name": "BaseBdev1", 00:13:00.782 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:00.782 "is_configured": true, 00:13:00.782 "data_offset": 2048, 00:13:00.782 "data_size": 63488 00:13:00.782 }, 00:13:00.782 { 00:13:00.782 "name": null, 00:13:00.782 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:00.782 "is_configured": false, 00:13:00.782 "data_offset": 0, 00:13:00.782 "data_size": 63488 00:13:00.782 }, 00:13:00.782 { 00:13:00.782 "name": null, 00:13:00.782 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:00.782 "is_configured": false, 00:13:00.782 "data_offset": 0, 00:13:00.782 "data_size": 63488 00:13:00.782 } 00:13:00.782 ] 00:13:00.782 }' 00:13:00.782 18:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.782 18:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.042 18:44:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.042 [2024-12-15 18:44:01.426886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.042 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.302 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.302 "name": "Existed_Raid", 00:13:01.302 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:01.302 "strip_size_kb": 64, 00:13:01.302 "state": "configuring", 00:13:01.302 "raid_level": "raid5f", 00:13:01.302 "superblock": true, 00:13:01.302 "num_base_bdevs": 3, 00:13:01.302 "num_base_bdevs_discovered": 2, 00:13:01.302 "num_base_bdevs_operational": 3, 00:13:01.302 "base_bdevs_list": [ 00:13:01.302 { 00:13:01.302 "name": "BaseBdev1", 00:13:01.302 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:01.302 "is_configured": true, 00:13:01.302 "data_offset": 2048, 00:13:01.302 "data_size": 63488 00:13:01.302 }, 00:13:01.302 { 00:13:01.302 "name": null, 00:13:01.302 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:01.302 "is_configured": false, 00:13:01.302 "data_offset": 0, 00:13:01.302 "data_size": 63488 00:13:01.302 }, 00:13:01.302 { 00:13:01.302 "name": "BaseBdev3", 00:13:01.302 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:01.302 "is_configured": true, 00:13:01.302 "data_offset": 
2048, 00:13:01.302 "data_size": 63488 00:13:01.302 } 00:13:01.302 ] 00:13:01.302 }' 00:13:01.302 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.302 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 [2024-12-15 18:44:01.962017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.562 18:44:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.562 18:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.821 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.821 "name": "Existed_Raid", 00:13:01.821 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:01.821 "strip_size_kb": 64, 00:13:01.821 "state": "configuring", 00:13:01.821 "raid_level": "raid5f", 00:13:01.821 "superblock": true, 00:13:01.821 "num_base_bdevs": 3, 00:13:01.821 "num_base_bdevs_discovered": 1, 00:13:01.821 "num_base_bdevs_operational": 3, 00:13:01.821 "base_bdevs_list": [ 00:13:01.821 { 00:13:01.821 "name": null, 00:13:01.821 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 
00:13:01.821 "is_configured": false, 00:13:01.821 "data_offset": 0, 00:13:01.821 "data_size": 63488 00:13:01.821 }, 00:13:01.821 { 00:13:01.821 "name": null, 00:13:01.821 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:01.821 "is_configured": false, 00:13:01.821 "data_offset": 0, 00:13:01.821 "data_size": 63488 00:13:01.821 }, 00:13:01.821 { 00:13:01.821 "name": "BaseBdev3", 00:13:01.821 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:01.821 "is_configured": true, 00:13:01.821 "data_offset": 2048, 00:13:01.821 "data_size": 63488 00:13:01.821 } 00:13:01.821 ] 00:13:01.821 }' 00:13:01.821 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.821 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.082 [2024-12-15 18:44:02.455820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.082 "name": "Existed_Raid", 00:13:02.082 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:02.082 "strip_size_kb": 64, 00:13:02.082 "state": "configuring", 00:13:02.082 "raid_level": "raid5f", 00:13:02.082 "superblock": true, 00:13:02.082 "num_base_bdevs": 3, 00:13:02.082 "num_base_bdevs_discovered": 2, 00:13:02.082 "num_base_bdevs_operational": 3, 00:13:02.082 "base_bdevs_list": [ 00:13:02.082 { 00:13:02.082 "name": null, 00:13:02.082 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:02.082 "is_configured": false, 00:13:02.082 "data_offset": 0, 00:13:02.082 "data_size": 63488 00:13:02.082 }, 00:13:02.082 { 00:13:02.082 "name": "BaseBdev2", 00:13:02.082 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:02.082 "is_configured": true, 00:13:02.082 "data_offset": 2048, 00:13:02.082 "data_size": 63488 00:13:02.082 }, 00:13:02.082 { 00:13:02.082 "name": "BaseBdev3", 00:13:02.082 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:02.082 "is_configured": true, 00:13:02.082 "data_offset": 2048, 00:13:02.082 "data_size": 63488 00:13:02.082 } 00:13:02.082 ] 00:13:02.082 }' 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.082 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.652 18:44:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 18:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8c870437-be03-4e96-97f1-2d97842cf0f1 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 [2024-12-15 18:44:03.013789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:02.652 [2024-12-15 18:44:03.014052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:02.652 [2024-12-15 18:44:03.014104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.652 [2024-12-15 18:44:03.014388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:02.652 NewBaseBdev 00:13:02.652 [2024-12-15 18:44:03.014827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:02.652 [2024-12-15 18:44:03.014878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:02.652 [2024-12-15 18:44:03.015019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 [ 00:13:02.652 { 00:13:02.652 "name": "NewBaseBdev", 00:13:02.652 "aliases": [ 00:13:02.652 "8c870437-be03-4e96-97f1-2d97842cf0f1" 00:13:02.652 ], 00:13:02.652 "product_name": "Malloc disk", 00:13:02.652 "block_size": 512, 00:13:02.652 "num_blocks": 65536, 00:13:02.652 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 
00:13:02.652 "assigned_rate_limits": { 00:13:02.652 "rw_ios_per_sec": 0, 00:13:02.652 "rw_mbytes_per_sec": 0, 00:13:02.652 "r_mbytes_per_sec": 0, 00:13:02.652 "w_mbytes_per_sec": 0 00:13:02.652 }, 00:13:02.652 "claimed": true, 00:13:02.652 "claim_type": "exclusive_write", 00:13:02.652 "zoned": false, 00:13:02.652 "supported_io_types": { 00:13:02.652 "read": true, 00:13:02.652 "write": true, 00:13:02.652 "unmap": true, 00:13:02.652 "flush": true, 00:13:02.652 "reset": true, 00:13:02.652 "nvme_admin": false, 00:13:02.652 "nvme_io": false, 00:13:02.652 "nvme_io_md": false, 00:13:02.652 "write_zeroes": true, 00:13:02.652 "zcopy": true, 00:13:02.652 "get_zone_info": false, 00:13:02.652 "zone_management": false, 00:13:02.652 "zone_append": false, 00:13:02.652 "compare": false, 00:13:02.652 "compare_and_write": false, 00:13:02.652 "abort": true, 00:13:02.652 "seek_hole": false, 00:13:02.652 "seek_data": false, 00:13:02.652 "copy": true, 00:13:02.652 "nvme_iov_md": false 00:13:02.652 }, 00:13:02.652 "memory_domains": [ 00:13:02.652 { 00:13:02.652 "dma_device_id": "system", 00:13:02.652 "dma_device_type": 1 00:13:02.652 }, 00:13:02.652 { 00:13:02.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.652 "dma_device_type": 2 00:13:02.652 } 00:13:02.652 ], 00:13:02.652 "driver_specific": {} 00:13:02.652 } 00:13:02.652 ] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.652 18:44:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.652 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.912 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.912 "name": "Existed_Raid", 00:13:02.912 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:02.912 "strip_size_kb": 64, 00:13:02.912 "state": "online", 00:13:02.912 "raid_level": "raid5f", 00:13:02.912 "superblock": true, 00:13:02.912 "num_base_bdevs": 3, 00:13:02.912 "num_base_bdevs_discovered": 3, 00:13:02.912 "num_base_bdevs_operational": 3, 00:13:02.912 "base_bdevs_list": [ 00:13:02.912 { 00:13:02.912 "name": "NewBaseBdev", 00:13:02.912 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 
00:13:02.912 "is_configured": true, 00:13:02.912 "data_offset": 2048, 00:13:02.912 "data_size": 63488 00:13:02.912 }, 00:13:02.912 { 00:13:02.912 "name": "BaseBdev2", 00:13:02.912 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:02.912 "is_configured": true, 00:13:02.912 "data_offset": 2048, 00:13:02.912 "data_size": 63488 00:13:02.912 }, 00:13:02.912 { 00:13:02.912 "name": "BaseBdev3", 00:13:02.912 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:02.912 "is_configured": true, 00:13:02.912 "data_offset": 2048, 00:13:02.912 "data_size": 63488 00:13:02.912 } 00:13:02.912 ] 00:13:02.912 }' 00:13:02.912 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.912 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.172 
[2024-12-15 18:44:03.517137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.172 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.172 "name": "Existed_Raid", 00:13:03.172 "aliases": [ 00:13:03.172 "6ca9a86b-e359-4493-9a96-5850e2ff5eab" 00:13:03.172 ], 00:13:03.172 "product_name": "Raid Volume", 00:13:03.172 "block_size": 512, 00:13:03.172 "num_blocks": 126976, 00:13:03.172 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:03.172 "assigned_rate_limits": { 00:13:03.172 "rw_ios_per_sec": 0, 00:13:03.172 "rw_mbytes_per_sec": 0, 00:13:03.172 "r_mbytes_per_sec": 0, 00:13:03.172 "w_mbytes_per_sec": 0 00:13:03.172 }, 00:13:03.172 "claimed": false, 00:13:03.172 "zoned": false, 00:13:03.172 "supported_io_types": { 00:13:03.172 "read": true, 00:13:03.172 "write": true, 00:13:03.172 "unmap": false, 00:13:03.172 "flush": false, 00:13:03.172 "reset": true, 00:13:03.172 "nvme_admin": false, 00:13:03.172 "nvme_io": false, 00:13:03.172 "nvme_io_md": false, 00:13:03.172 "write_zeroes": true, 00:13:03.172 "zcopy": false, 00:13:03.172 "get_zone_info": false, 00:13:03.172 "zone_management": false, 00:13:03.172 "zone_append": false, 00:13:03.172 "compare": false, 00:13:03.172 "compare_and_write": false, 00:13:03.172 "abort": false, 00:13:03.172 "seek_hole": false, 00:13:03.172 "seek_data": false, 00:13:03.173 "copy": false, 00:13:03.173 "nvme_iov_md": false 00:13:03.173 }, 00:13:03.173 "driver_specific": { 00:13:03.173 "raid": { 00:13:03.173 "uuid": "6ca9a86b-e359-4493-9a96-5850e2ff5eab", 00:13:03.173 "strip_size_kb": 64, 00:13:03.173 "state": "online", 00:13:03.173 "raid_level": "raid5f", 00:13:03.173 "superblock": true, 00:13:03.173 "num_base_bdevs": 3, 00:13:03.173 "num_base_bdevs_discovered": 3, 00:13:03.173 "num_base_bdevs_operational": 3, 00:13:03.173 "base_bdevs_list": 
[ 00:13:03.173 { 00:13:03.173 "name": "NewBaseBdev", 00:13:03.173 "uuid": "8c870437-be03-4e96-97f1-2d97842cf0f1", 00:13:03.173 "is_configured": true, 00:13:03.173 "data_offset": 2048, 00:13:03.173 "data_size": 63488 00:13:03.173 }, 00:13:03.173 { 00:13:03.173 "name": "BaseBdev2", 00:13:03.173 "uuid": "cc1f782b-664c-4143-9297-38388b5a6c85", 00:13:03.173 "is_configured": true, 00:13:03.173 "data_offset": 2048, 00:13:03.173 "data_size": 63488 00:13:03.173 }, 00:13:03.173 { 00:13:03.173 "name": "BaseBdev3", 00:13:03.173 "uuid": "670425a8-2f40-456c-a78c-c8a57f3cb2fb", 00:13:03.173 "is_configured": true, 00:13:03.173 "data_offset": 2048, 00:13:03.173 "data_size": 63488 00:13:03.173 } 00:13:03.173 ] 00:13:03.173 } 00:13:03.173 } 00:13:03.173 }' 00:13:03.173 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.173 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:03.173 BaseBdev2 00:13:03.173 BaseBdev3' 00:13:03.173 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.433 18:44:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.433 [2024-12-15 18:44:03.788511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.433 [2024-12-15 18:44:03.788584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.433 [2024-12-15 18:44:03.788716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.433 [2024-12-15 18:44:03.788988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.433 [2024-12-15 18:44:03.789048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92987 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92987 ']' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 92987 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92987 00:13:03.433 killing process with pid 92987 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.433 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92987' 00:13:03.434 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 92987 00:13:03.434 [2024-12-15 18:44:03.834428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.434 18:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 92987 00:13:03.434 [2024-12-15 18:44:03.865282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.694 18:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:03.694 00:13:03.694 real 0m9.059s 00:13:03.694 user 0m15.432s 00:13:03.694 sys 0m1.995s 00:13:03.694 18:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.694 ************************************ 00:13:03.694 END TEST raid5f_state_function_test_sb 00:13:03.694 ************************************ 00:13:03.694 18:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.954 18:44:04 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:03.954 18:44:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:03.954 18:44:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.954 18:44:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:03.954 ************************************ 00:13:03.954 START TEST raid5f_superblock_test 00:13:03.954 ************************************ 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=93591 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93591 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 93591 ']' 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.954 18:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.954 [2024-12-15 18:44:04.251966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:13:03.954 [2024-12-15 18:44:04.252094] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93591 ] 00:13:04.214 [2024-12-15 18:44:04.420005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.214 [2024-12-15 18:44:04.444680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.214 [2024-12-15 18:44:04.486900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.214 [2024-12-15 18:44:04.486939] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.786 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 malloc1 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 [2024-12-15 18:44:05.102290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.787 [2024-12-15 18:44:05.102408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.787 [2024-12-15 18:44:05.102448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.787 [2024-12-15 18:44:05.102497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.787 [2024-12-15 18:44:05.104586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.787 [2024-12-15 18:44:05.104678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.787 pt1 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 malloc2 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 [2024-12-15 18:44:05.130875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.787 [2024-12-15 18:44:05.130964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.787 [2024-12-15 18:44:05.130998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.787 [2024-12-15 18:44:05.131027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.787 [2024-12-15 18:44:05.133061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.787 [2024-12-15 18:44:05.133135] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.787 pt2 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 malloc3 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.787 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.787 [2024-12-15 18:44:05.163526] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.787 [2024-12-15 18:44:05.163616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.787 [2024-12-15 18:44:05.163655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:04.788 [2024-12-15 18:44:05.163684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.788 [2024-12-15 18:44:05.165733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.788 [2024-12-15 18:44:05.165825] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.788 pt3 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.788 [2024-12-15 18:44:05.175553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:04.788 [2024-12-15 18:44:05.177445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.788 [2024-12-15 18:44:05.177541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.788 [2024-12-15 18:44:05.177713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:04.788 [2024-12-15 18:44:05.177763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
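Annotation: the configured raid bdev reports `blockcnt 126976, blocklen 512`, and that figure is self-consistent with the inputs visible in the trace — each base is a 32 MiB malloc bdev (`bdev_malloc_create 32 512`, i.e. 65536 blocks of 512 B), the superblock reserves the `data_offset` of 2048 blocks shown in the JSON dump (leaving `data_size` 63488), and raid5f spends one strip per stripe on parity, so usable capacity is (3 − 1) × 63488. The arithmetic, with the 2048-block offset taken as given from the dump:

```shell
# Check that the raid5f blockcnt in the trace is consistent with its inputs.
malloc_mib=32
block_size=512
num_base_bdevs=3

blocks_per_base=$(( malloc_mib * 1024 * 1024 / block_size ))  # 65536 blocks per malloc bdev
data_offset=2048                                              # superblock reservation from the JSON dump
data_size=$(( blocks_per_base - data_offset ))                # 63488, matching "data_size" above
raid5f_blockcnt=$(( (num_base_bdevs - 1) * data_size ))       # one strip per stripe holds parity

echo "$data_size $raid5f_blockcnt"                            # 63488 126976
```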
00:13:04.788 [2024-12-15 18:44:05.178054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:04.788 [2024-12-15 18:44:05.178493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:04.788 [2024-12-15 18:44:05.178540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:04.788 [2024-12-15 18:44:05.178688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.788 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.049 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.049 "name": "raid_bdev1", 00:13:05.049 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:05.049 "strip_size_kb": 64, 00:13:05.049 "state": "online", 00:13:05.049 "raid_level": "raid5f", 00:13:05.049 "superblock": true, 00:13:05.049 "num_base_bdevs": 3, 00:13:05.049 "num_base_bdevs_discovered": 3, 00:13:05.049 "num_base_bdevs_operational": 3, 00:13:05.049 "base_bdevs_list": [ 00:13:05.049 { 00:13:05.049 "name": "pt1", 00:13:05.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.049 "is_configured": true, 00:13:05.049 "data_offset": 2048, 00:13:05.049 "data_size": 63488 00:13:05.049 }, 00:13:05.049 { 00:13:05.049 "name": "pt2", 00:13:05.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.049 "is_configured": true, 00:13:05.049 "data_offset": 2048, 00:13:05.049 "data_size": 63488 00:13:05.049 }, 00:13:05.049 { 00:13:05.049 "name": "pt3", 00:13:05.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.049 "is_configured": true, 00:13:05.049 "data_offset": 2048, 00:13:05.049 "data_size": 63488 00:13:05.049 } 00:13:05.049 ] 00:13:05.049 }' 00:13:05.049 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.049 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:05.308 18:44:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.308 [2024-12-15 18:44:05.579626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.308 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:05.309 "name": "raid_bdev1", 00:13:05.309 "aliases": [ 00:13:05.309 "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e" 00:13:05.309 ], 00:13:05.309 "product_name": "Raid Volume", 00:13:05.309 "block_size": 512, 00:13:05.309 "num_blocks": 126976, 00:13:05.309 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:05.309 "assigned_rate_limits": { 00:13:05.309 "rw_ios_per_sec": 0, 00:13:05.309 "rw_mbytes_per_sec": 0, 00:13:05.309 "r_mbytes_per_sec": 0, 00:13:05.309 "w_mbytes_per_sec": 0 00:13:05.309 }, 00:13:05.309 "claimed": false, 00:13:05.309 "zoned": false, 00:13:05.309 "supported_io_types": { 00:13:05.309 "read": true, 00:13:05.309 "write": true, 00:13:05.309 "unmap": false, 00:13:05.309 "flush": false, 00:13:05.309 "reset": true, 00:13:05.309 "nvme_admin": false, 00:13:05.309 "nvme_io": false, 00:13:05.309 "nvme_io_md": false, 
00:13:05.309 "write_zeroes": true, 00:13:05.309 "zcopy": false, 00:13:05.309 "get_zone_info": false, 00:13:05.309 "zone_management": false, 00:13:05.309 "zone_append": false, 00:13:05.309 "compare": false, 00:13:05.309 "compare_and_write": false, 00:13:05.309 "abort": false, 00:13:05.309 "seek_hole": false, 00:13:05.309 "seek_data": false, 00:13:05.309 "copy": false, 00:13:05.309 "nvme_iov_md": false 00:13:05.309 }, 00:13:05.309 "driver_specific": { 00:13:05.309 "raid": { 00:13:05.309 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:05.309 "strip_size_kb": 64, 00:13:05.309 "state": "online", 00:13:05.309 "raid_level": "raid5f", 00:13:05.309 "superblock": true, 00:13:05.309 "num_base_bdevs": 3, 00:13:05.309 "num_base_bdevs_discovered": 3, 00:13:05.309 "num_base_bdevs_operational": 3, 00:13:05.309 "base_bdevs_list": [ 00:13:05.309 { 00:13:05.309 "name": "pt1", 00:13:05.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.309 "is_configured": true, 00:13:05.309 "data_offset": 2048, 00:13:05.309 "data_size": 63488 00:13:05.309 }, 00:13:05.309 { 00:13:05.309 "name": "pt2", 00:13:05.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.309 "is_configured": true, 00:13:05.309 "data_offset": 2048, 00:13:05.309 "data_size": 63488 00:13:05.309 }, 00:13:05.309 { 00:13:05.309 "name": "pt3", 00:13:05.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.309 "is_configured": true, 00:13:05.309 "data_offset": 2048, 00:13:05.309 "data_size": 63488 00:13:05.309 } 00:13:05.309 ] 00:13:05.309 } 00:13:05.309 } 00:13:05.309 }' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:05.309 pt2 00:13:05.309 pt3' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.309 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.569 
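Annotation: `verify_raid_bdev_properties` compares the `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` tuple of the raid volume against each base bdev. With no metadata configured, the last three fields are null, so jq's `join` renders each tuple as `512` followed by three spaces — which is exactly what the `[[ 512 == \5\1\2\ \ \ ]]` checks match. The same comparison sketched in plain shell, with empty strings standing in for the null fields (mirroring jq's join semantics, not invoking jq itself):

```shell
# Rebuild the comparison tuple the way the jq filter does; null fields join as "".
block_size=512
md_size=""        # absent in the dump: no metadata configured
md_interleave=""
dif_type=""

cmp_raid_bdev="$block_size $md_size $md_interleave $dif_type"   # "512" plus three trailing spaces

# Each base bdev (pt1..pt3) reports the same tuple, so every check passes.
cmp_base_bdev="512   "
[ "$cmp_raid_bdev" = "$cmp_base_bdev" ] && echo match
```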
18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.569 [2024-12-15 18:44:05.827117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.569 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7b500cd0-e98e-4bf4-8c8c-fda6803ec18e 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7b500cd0-e98e-4bf4-8c8c-fda6803ec18e ']' 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.570 18:44:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 [2024-12-15 18:44:05.854915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.570 [2024-12-15 18:44:05.854972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.570 [2024-12-15 18:44:05.855065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.570 [2024-12-15 18:44:05.855151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.570 [2024-12-15 18:44:05.855210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:05.570 18:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.570 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:05.570 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.570 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.570 [2024-12-15 18:44:06.006678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:05.570 [2024-12-15 18:44:06.008572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:05.570 [2024-12-15 18:44:06.008666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:05.570 [2024-12-15 18:44:06.008734] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:05.570 [2024-12-15 18:44:06.008859] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:05.570 [2024-12-15 18:44:06.008915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:05.570 [2024-12-15 18:44:06.008967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.570 [2024-12-15 18:44:06.009036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:05.830 request: 00:13:05.830 { 00:13:05.830 "name": "raid_bdev1", 00:13:05.830 "raid_level": "raid5f", 00:13:05.830 "base_bdevs": [ 00:13:05.830 "malloc1", 00:13:05.830 "malloc2", 00:13:05.830 "malloc3" 00:13:05.830 ], 00:13:05.830 "strip_size_kb": 64, 00:13:05.830 "superblock": false, 00:13:05.830 "method": "bdev_raid_create", 00:13:05.830 "req_id": 1 00:13:05.830 } 00:13:05.830 Got JSON-RPC error response 00:13:05.830 response: 00:13:05.830 { 00:13:05.830 "code": -17, 00:13:05.830 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:05.830 } 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
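Annotation: the negative test above re-runs `bdev_raid_create` directly over `malloc1 malloc2 malloc3`, which still carry the on-disk superblocks written when `raid_bdev1` was built through the `pt*` passthrus; the target logs "Superblock of a different raid bdev found" for each and rejects the request with JSON-RPC error `-17` ("File exists" — the errno value of `EEXIST` on Linux), and the `NOT` wrapper turns that expected failure into a pass. A small sketch of recognizing that specific error, with the code and message copied from the response in the trace:

```shell
# Values copied from the JSON-RPC error body above; -17 corresponds to EEXIST,
# whose strerror text is "File exists".
code=-17
message='Failed to create RAID bdev raid_bdev1: File exists'

already_exists=false
if [ "$code" -eq -17 ] && [[ "$message" == *'File exists'* ]]; then
    already_exists=true     # duplicate superblock: the raid bdev was created before
fi
echo "$already_exists"
```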
00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.830 [2024-12-15 18:44:06.070550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:05.830 [2024-12-15 18:44:06.070631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.830 [2024-12-15 18:44:06.070664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:05.830 [2024-12-15 18:44:06.070693] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.830 [2024-12-15 18:44:06.072764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.830 [2024-12-15 18:44:06.072855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:05.830 [2024-12-15 18:44:06.072942] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:05.830 [2024-12-15 18:44:06.073010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.830 pt1 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.830 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.830 "name": "raid_bdev1", 00:13:05.831 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:05.831 "strip_size_kb": 64, 00:13:05.831 "state": "configuring", 00:13:05.831 "raid_level": "raid5f", 00:13:05.831 "superblock": true, 00:13:05.831 "num_base_bdevs": 3, 00:13:05.831 "num_base_bdevs_discovered": 1, 00:13:05.831 
"num_base_bdevs_operational": 3, 00:13:05.831 "base_bdevs_list": [ 00:13:05.831 { 00:13:05.831 "name": "pt1", 00:13:05.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.831 "is_configured": true, 00:13:05.831 "data_offset": 2048, 00:13:05.831 "data_size": 63488 00:13:05.831 }, 00:13:05.831 { 00:13:05.831 "name": null, 00:13:05.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.831 "is_configured": false, 00:13:05.831 "data_offset": 2048, 00:13:05.831 "data_size": 63488 00:13:05.831 }, 00:13:05.831 { 00:13:05.831 "name": null, 00:13:05.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.831 "is_configured": false, 00:13:05.831 "data_offset": 2048, 00:13:05.831 "data_size": 63488 00:13:05.831 } 00:13:05.831 ] 00:13:05.831 }' 00:13:05.831 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.831 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.091 [2024-12-15 18:44:06.501919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.091 [2024-12-15 18:44:06.502008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.091 [2024-12-15 18:44:06.502044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:06.091 [2024-12-15 18:44:06.502075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.091 [2024-12-15 18:44:06.502451] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.091 [2024-12-15 18:44:06.502510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.091 [2024-12-15 18:44:06.502605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:06.091 [2024-12-15 18:44:06.502662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.091 pt2 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.091 [2024-12-15 18:44:06.513914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.091 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.351 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.351 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.351 "name": "raid_bdev1", 00:13:06.351 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:06.351 "strip_size_kb": 64, 00:13:06.351 "state": "configuring", 00:13:06.351 "raid_level": "raid5f", 00:13:06.351 "superblock": true, 00:13:06.351 "num_base_bdevs": 3, 00:13:06.351 "num_base_bdevs_discovered": 1, 00:13:06.351 "num_base_bdevs_operational": 3, 00:13:06.351 "base_bdevs_list": [ 00:13:06.351 { 00:13:06.351 "name": "pt1", 00:13:06.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.351 "is_configured": true, 00:13:06.351 "data_offset": 2048, 00:13:06.351 "data_size": 63488 00:13:06.351 }, 00:13:06.351 { 00:13:06.351 "name": null, 00:13:06.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.351 "is_configured": false, 00:13:06.351 "data_offset": 0, 00:13:06.351 "data_size": 63488 00:13:06.351 }, 00:13:06.351 { 00:13:06.351 "name": null, 00:13:06.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.351 "is_configured": false, 00:13:06.351 "data_offset": 2048, 00:13:06.351 "data_size": 63488 00:13:06.351 } 00:13:06.351 ] 00:13:06.351 }' 00:13:06.351 18:44:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.351 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.611 [2024-12-15 18:44:06.941216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.611 [2024-12-15 18:44:06.941311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.611 [2024-12-15 18:44:06.941358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:06.611 [2024-12-15 18:44:06.941386] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.611 [2024-12-15 18:44:06.941786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.611 [2024-12-15 18:44:06.941857] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.611 [2024-12-15 18:44:06.941959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:06.611 [2024-12-15 18:44:06.942008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.611 pt2 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.611 18:44:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.611 [2024-12-15 18:44:06.953176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.611 [2024-12-15 18:44:06.953250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.611 [2024-12-15 18:44:06.953290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:06.611 [2024-12-15 18:44:06.953320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.611 [2024-12-15 18:44:06.953633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.611 [2024-12-15 18:44:06.953686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.611 [2024-12-15 18:44:06.953764] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:06.611 [2024-12-15 18:44:06.953828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.611 [2024-12-15 18:44:06.953985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:06.611 [2024-12-15 18:44:06.954024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:06.611 [2024-12-15 18:44:06.954244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:06.611 [2024-12-15 18:44:06.954657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:06.611 [2024-12-15 18:44:06.954706] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:06.611 [2024-12-15 18:44:06.954867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.611 pt3 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.611 18:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.611 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.611 "name": "raid_bdev1", 00:13:06.611 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:06.611 "strip_size_kb": 64, 00:13:06.611 "state": "online", 00:13:06.611 "raid_level": "raid5f", 00:13:06.611 "superblock": true, 00:13:06.611 "num_base_bdevs": 3, 00:13:06.611 "num_base_bdevs_discovered": 3, 00:13:06.611 "num_base_bdevs_operational": 3, 00:13:06.611 "base_bdevs_list": [ 00:13:06.611 { 00:13:06.611 "name": "pt1", 00:13:06.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.611 "is_configured": true, 00:13:06.611 "data_offset": 2048, 00:13:06.611 "data_size": 63488 00:13:06.611 }, 00:13:06.611 { 00:13:06.611 "name": "pt2", 00:13:06.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.611 "is_configured": true, 00:13:06.611 "data_offset": 2048, 00:13:06.611 "data_size": 63488 00:13:06.611 }, 00:13:06.611 { 00:13:06.611 "name": "pt3", 00:13:06.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.611 "is_configured": true, 00:13:06.611 "data_offset": 2048, 00:13:06.611 "data_size": 63488 00:13:06.611 } 00:13:06.611 ] 00:13:06.611 }' 00:13:06.611 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.611 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.188 [2024-12-15 18:44:07.396675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.188 "name": "raid_bdev1", 00:13:07.188 "aliases": [ 00:13:07.188 "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e" 00:13:07.188 ], 00:13:07.188 "product_name": "Raid Volume", 00:13:07.188 "block_size": 512, 00:13:07.188 "num_blocks": 126976, 00:13:07.188 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:07.188 "assigned_rate_limits": { 00:13:07.188 "rw_ios_per_sec": 0, 00:13:07.188 "rw_mbytes_per_sec": 0, 00:13:07.188 "r_mbytes_per_sec": 0, 00:13:07.188 "w_mbytes_per_sec": 0 00:13:07.188 }, 00:13:07.188 "claimed": false, 00:13:07.188 "zoned": false, 00:13:07.188 "supported_io_types": { 00:13:07.188 "read": true, 00:13:07.188 "write": true, 00:13:07.188 "unmap": false, 00:13:07.188 "flush": false, 00:13:07.188 "reset": true, 00:13:07.188 "nvme_admin": false, 00:13:07.188 "nvme_io": false, 00:13:07.188 "nvme_io_md": false, 00:13:07.188 "write_zeroes": true, 00:13:07.188 "zcopy": false, 00:13:07.188 
"get_zone_info": false, 00:13:07.188 "zone_management": false, 00:13:07.188 "zone_append": false, 00:13:07.188 "compare": false, 00:13:07.188 "compare_and_write": false, 00:13:07.188 "abort": false, 00:13:07.188 "seek_hole": false, 00:13:07.188 "seek_data": false, 00:13:07.188 "copy": false, 00:13:07.188 "nvme_iov_md": false 00:13:07.188 }, 00:13:07.188 "driver_specific": { 00:13:07.188 "raid": { 00:13:07.188 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:07.188 "strip_size_kb": 64, 00:13:07.188 "state": "online", 00:13:07.188 "raid_level": "raid5f", 00:13:07.188 "superblock": true, 00:13:07.188 "num_base_bdevs": 3, 00:13:07.188 "num_base_bdevs_discovered": 3, 00:13:07.188 "num_base_bdevs_operational": 3, 00:13:07.188 "base_bdevs_list": [ 00:13:07.188 { 00:13:07.188 "name": "pt1", 00:13:07.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.188 "is_configured": true, 00:13:07.188 "data_offset": 2048, 00:13:07.188 "data_size": 63488 00:13:07.188 }, 00:13:07.188 { 00:13:07.188 "name": "pt2", 00:13:07.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.188 "is_configured": true, 00:13:07.188 "data_offset": 2048, 00:13:07.188 "data_size": 63488 00:13:07.188 }, 00:13:07.188 { 00:13:07.188 "name": "pt3", 00:13:07.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.188 "is_configured": true, 00:13:07.188 "data_offset": 2048, 00:13:07.188 "data_size": 63488 00:13:07.188 } 00:13:07.188 ] 00:13:07.188 } 00:13:07.188 } 00:13:07.188 }' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:07.188 pt2 00:13:07.188 pt3' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.188 18:44:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.188 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 [2024-12-15 18:44:07.692127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7b500cd0-e98e-4bf4-8c8c-fda6803ec18e '!=' 7b500cd0-e98e-4bf4-8c8c-fda6803ec18e ']' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 [2024-12-15 18:44:07.739932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.464 "name": "raid_bdev1", 00:13:07.464 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:07.464 "strip_size_kb": 64, 00:13:07.464 "state": "online", 00:13:07.464 "raid_level": "raid5f", 00:13:07.464 "superblock": true, 00:13:07.464 "num_base_bdevs": 3, 00:13:07.464 "num_base_bdevs_discovered": 2, 00:13:07.464 "num_base_bdevs_operational": 2, 00:13:07.464 "base_bdevs_list": [ 00:13:07.464 { 00:13:07.464 "name": null, 00:13:07.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.464 "is_configured": false, 00:13:07.464 "data_offset": 0, 00:13:07.464 "data_size": 63488 00:13:07.464 }, 00:13:07.464 { 00:13:07.464 "name": "pt2", 00:13:07.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.464 "is_configured": true, 00:13:07.464 "data_offset": 2048, 00:13:07.464 "data_size": 63488 00:13:07.464 }, 00:13:07.464 { 00:13:07.464 "name": "pt3", 00:13:07.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.464 "is_configured": true, 00:13:07.464 "data_offset": 2048, 00:13:07.464 "data_size": 63488 00:13:07.464 } 00:13:07.464 ] 00:13:07.464 }' 00:13:07.464 18:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.465 18:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.724 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.724 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.724 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.724 [2024-12-15 18:44:08.163137] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.724 [2024-12-15 18:44:08.163206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.724 [2024-12-15 18:44:08.163285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.724 [2024-12-15 18:44:08.163357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.724 [2024-12-15 18:44:08.163443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.984 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.985 [2024-12-15 18:44:08.246994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:07.985 [2024-12-15 18:44:08.247042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.985 [2024-12-15 18:44:08.247059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:07.985 [2024-12-15 18:44:08.247067] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:07.985 [2024-12-15 18:44:08.249185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.985 [2024-12-15 18:44:08.249255] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:07.985 [2024-12-15 18:44:08.249344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:07.985 [2024-12-15 18:44:08.249393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:07.985 pt2 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.985 "name": "raid_bdev1", 00:13:07.985 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:07.985 "strip_size_kb": 64, 00:13:07.985 "state": "configuring", 00:13:07.985 "raid_level": "raid5f", 00:13:07.985 "superblock": true, 00:13:07.985 "num_base_bdevs": 3, 00:13:07.985 "num_base_bdevs_discovered": 1, 00:13:07.985 "num_base_bdevs_operational": 2, 00:13:07.985 "base_bdevs_list": [ 00:13:07.985 { 00:13:07.985 "name": null, 00:13:07.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.985 "is_configured": false, 00:13:07.985 "data_offset": 2048, 00:13:07.985 "data_size": 63488 00:13:07.985 }, 00:13:07.985 { 00:13:07.985 "name": "pt2", 00:13:07.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.985 "is_configured": true, 00:13:07.985 "data_offset": 2048, 00:13:07.985 "data_size": 63488 00:13:07.985 }, 00:13:07.985 { 00:13:07.985 "name": null, 00:13:07.985 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.985 "is_configured": false, 00:13:07.985 "data_offset": 2048, 00:13:07.985 "data_size": 63488 00:13:07.985 } 00:13:07.985 ] 00:13:07.985 }' 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.985 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.245 [2024-12-15 18:44:08.658320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:08.245 [2024-12-15 18:44:08.658378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.245 [2024-12-15 18:44:08.658401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:08.245 [2024-12-15 18:44:08.658409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.245 [2024-12-15 18:44:08.658764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.245 [2024-12-15 18:44:08.658788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:08.245 [2024-12-15 18:44:08.658867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:08.245 [2024-12-15 18:44:08.658887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:08.245 [2024-12-15 18:44:08.658975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:08.245 [2024-12-15 18:44:08.658988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.245 [2024-12-15 18:44:08.659229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:08.245 [2024-12-15 18:44:08.659665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:08.245 [2024-12-15 18:44:08.659680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:13:08.245 [2024-12-15 18:44:08.659898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.245 pt3 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.245 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.505 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.505 18:44:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.505 "name": "raid_bdev1", 00:13:08.505 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:08.505 "strip_size_kb": 64, 00:13:08.505 "state": "online", 00:13:08.505 "raid_level": "raid5f", 00:13:08.505 "superblock": true, 00:13:08.505 "num_base_bdevs": 3, 00:13:08.505 "num_base_bdevs_discovered": 2, 00:13:08.505 "num_base_bdevs_operational": 2, 00:13:08.505 "base_bdevs_list": [ 00:13:08.505 { 00:13:08.505 "name": null, 00:13:08.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.505 "is_configured": false, 00:13:08.505 "data_offset": 2048, 00:13:08.505 "data_size": 63488 00:13:08.506 }, 00:13:08.506 { 00:13:08.506 "name": "pt2", 00:13:08.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.506 "is_configured": true, 00:13:08.506 "data_offset": 2048, 00:13:08.506 "data_size": 63488 00:13:08.506 }, 00:13:08.506 { 00:13:08.506 "name": "pt3", 00:13:08.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.506 "is_configured": true, 00:13:08.506 "data_offset": 2048, 00:13:08.506 "data_size": 63488 00:13:08.506 } 00:13:08.506 ] 00:13:08.506 }' 00:13:08.506 18:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.506 18:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.765 [2024-12-15 18:44:09.101568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.765 [2024-12-15 18:44:09.101637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.765 [2024-12-15 18:44:09.101707] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.765 [2024-12-15 18:44:09.101770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.765 [2024-12-15 18:44:09.101782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:08.765 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.766 [2024-12-15 18:44:09.177430] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:08.766 [2024-12-15 18:44:09.177525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.766 [2024-12-15 18:44:09.177560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:08.766 [2024-12-15 18:44:09.177591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.766 [2024-12-15 18:44:09.179780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.766 [2024-12-15 18:44:09.179873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:08.766 [2024-12-15 18:44:09.179944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:08.766 [2024-12-15 18:44:09.179986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:08.766 [2024-12-15 18:44:09.180102] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:08.766 [2024-12-15 18:44:09.180122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:08.766 [2024-12-15 18:44:09.180138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:08.766 [2024-12-15 18:44:09.180174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.766 pt1 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:08.766 18:44:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.766 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.026 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.026 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.026 "name": "raid_bdev1", 00:13:09.026 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:09.026 "strip_size_kb": 64, 00:13:09.026 "state": "configuring", 00:13:09.026 "raid_level": "raid5f", 00:13:09.026 
"superblock": true, 00:13:09.026 "num_base_bdevs": 3, 00:13:09.026 "num_base_bdevs_discovered": 1, 00:13:09.026 "num_base_bdevs_operational": 2, 00:13:09.026 "base_bdevs_list": [ 00:13:09.026 { 00:13:09.026 "name": null, 00:13:09.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.026 "is_configured": false, 00:13:09.026 "data_offset": 2048, 00:13:09.026 "data_size": 63488 00:13:09.026 }, 00:13:09.026 { 00:13:09.026 "name": "pt2", 00:13:09.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.026 "is_configured": true, 00:13:09.026 "data_offset": 2048, 00:13:09.026 "data_size": 63488 00:13:09.026 }, 00:13:09.026 { 00:13:09.026 "name": null, 00:13:09.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.026 "is_configured": false, 00:13:09.026 "data_offset": 2048, 00:13:09.026 "data_size": 63488 00:13:09.026 } 00:13:09.026 ] 00:13:09.026 }' 00:13:09.026 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.026 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.286 [2024-12-15 18:44:09.656696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:09.286 [2024-12-15 18:44:09.656794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.286 [2024-12-15 18:44:09.656838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:09.286 [2024-12-15 18:44:09.656870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.286 [2024-12-15 18:44:09.657266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.286 [2024-12-15 18:44:09.657331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:09.286 [2024-12-15 18:44:09.657429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:09.286 [2024-12-15 18:44:09.657483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:09.286 [2024-12-15 18:44:09.657603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:09.286 [2024-12-15 18:44:09.657643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:09.286 [2024-12-15 18:44:09.657914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:09.286 [2024-12-15 18:44:09.658369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:09.286 [2024-12-15 18:44:09.658416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:09.286 [2024-12-15 18:44:09.658609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.286 pt3 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.286 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.287 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.287 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.287 "name": "raid_bdev1", 00:13:09.287 "uuid": "7b500cd0-e98e-4bf4-8c8c-fda6803ec18e", 00:13:09.287 "strip_size_kb": 64, 00:13:09.287 "state": "online", 00:13:09.287 "raid_level": 
"raid5f", 00:13:09.287 "superblock": true, 00:13:09.287 "num_base_bdevs": 3, 00:13:09.287 "num_base_bdevs_discovered": 2, 00:13:09.287 "num_base_bdevs_operational": 2, 00:13:09.287 "base_bdevs_list": [ 00:13:09.287 { 00:13:09.287 "name": null, 00:13:09.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.287 "is_configured": false, 00:13:09.287 "data_offset": 2048, 00:13:09.287 "data_size": 63488 00:13:09.287 }, 00:13:09.287 { 00:13:09.287 "name": "pt2", 00:13:09.287 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.287 "is_configured": true, 00:13:09.287 "data_offset": 2048, 00:13:09.287 "data_size": 63488 00:13:09.287 }, 00:13:09.287 { 00:13:09.287 "name": "pt3", 00:13:09.287 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.287 "is_configured": true, 00:13:09.287 "data_offset": 2048, 00:13:09.287 "data_size": 63488 00:13:09.287 } 00:13:09.287 ] 00:13:09.287 }' 00:13:09.287 18:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.287 18:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.856 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.857 [2024-12-15 18:44:10.120118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7b500cd0-e98e-4bf4-8c8c-fda6803ec18e '!=' 7b500cd0-e98e-4bf4-8c8c-fda6803ec18e ']' 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93591 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 93591 ']' 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 93591 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93591 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93591' 00:13:09.857 killing process with pid 93591 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 93591 00:13:09.857 [2024-12-15 18:44:10.202417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.857 [2024-12-15 18:44:10.202488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:09.857 [2024-12-15 18:44:10.202545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.857 [2024-12-15 18:44:10.202554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:09.857 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 93591 00:13:09.857 [2024-12-15 18:44:10.235786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.117 18:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:10.117 00:13:10.117 real 0m6.280s 00:13:10.117 user 0m10.476s 00:13:10.117 sys 0m1.417s 00:13:10.117 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.117 ************************************ 00:13:10.117 END TEST raid5f_superblock_test 00:13:10.117 ************************************ 00:13:10.117 18:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.117 18:44:10 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:10.117 18:44:10 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:10.117 18:44:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:10.117 18:44:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.117 18:44:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.117 ************************************ 00:13:10.117 START TEST raid5f_rebuild_test 00:13:10.117 ************************************ 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:10.117 18:44:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94018 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94018 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 94018 ']' 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.117 18:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.378 [2024-12-15 18:44:10.616092] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:13:10.378 [2024-12-15 18:44:10.616257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.378 Zero copy mechanism will not be used. 00:13:10.378 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94018 ] 00:13:10.378 [2024-12-15 18:44:10.785880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.378 [2024-12-15 18:44:10.810458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.637 [2024-12-15 18:44:10.853395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.637 [2024-12-15 18:44:10.853506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.207 BaseBdev1_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.207 
18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.207 [2024-12-15 18:44:11.473189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.207 [2024-12-15 18:44:11.473291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.207 [2024-12-15 18:44:11.473335] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.207 [2024-12-15 18:44:11.473367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.207 [2024-12-15 18:44:11.475464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.207 [2024-12-15 18:44:11.475535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.207 BaseBdev1 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.207 BaseBdev2_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.207 [2024-12-15 18:44:11.501736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:11.207 [2024-12-15 18:44:11.501846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.207 [2024-12-15 18:44:11.501883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.207 [2024-12-15 18:44:11.501908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.207 [2024-12-15 18:44:11.503925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.207 [2024-12-15 18:44:11.503992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.207 BaseBdev2 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.207 BaseBdev3_malloc 00:13:11.207 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 [2024-12-15 18:44:11.530279] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:11.208 [2024-12-15 18:44:11.530324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.208 [2024-12-15 18:44:11.530347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:11.208 [2024-12-15 18:44:11.530355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.208 [2024-12-15 18:44:11.532351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.208 [2024-12-15 18:44:11.532384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:11.208 BaseBdev3 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 spare_malloc 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 spare_delay 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 [2024-12-15 18:44:11.586102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:11.208 [2024-12-15 18:44:11.586205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.208 [2024-12-15 18:44:11.586255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:11.208 [2024-12-15 18:44:11.586269] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.208 [2024-12-15 18:44:11.588894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.208 [2024-12-15 18:44:11.588935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.208 spare 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 [2024-12-15 18:44:11.598121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.208 [2024-12-15 18:44:11.599786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.208 [2024-12-15 18:44:11.599890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.208 [2024-12-15 18:44:11.599979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:11.208 [2024-12-15 18:44:11.600006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:11.208 [2024-12-15 
18:44:11.600248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:11.208 [2024-12-15 18:44:11.600696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:11.208 [2024-12-15 18:44:11.600744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:11.208 [2024-12-15 18:44:11.600906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.468 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.468 "name": "raid_bdev1", 00:13:11.468 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:11.468 "strip_size_kb": 64, 00:13:11.468 "state": "online", 00:13:11.468 "raid_level": "raid5f", 00:13:11.468 "superblock": false, 00:13:11.468 "num_base_bdevs": 3, 00:13:11.468 "num_base_bdevs_discovered": 3, 00:13:11.468 "num_base_bdevs_operational": 3, 00:13:11.468 "base_bdevs_list": [ 00:13:11.468 { 00:13:11.468 "name": "BaseBdev1", 00:13:11.468 "uuid": "8500174a-cc9f-57e9-885e-3f44d8c5103c", 00:13:11.468 "is_configured": true, 00:13:11.468 "data_offset": 0, 00:13:11.468 "data_size": 65536 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "name": "BaseBdev2", 00:13:11.468 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:11.468 "is_configured": true, 00:13:11.468 "data_offset": 0, 00:13:11.468 "data_size": 65536 00:13:11.468 }, 00:13:11.468 { 00:13:11.468 "name": "BaseBdev3", 00:13:11.468 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:11.468 "is_configured": true, 00:13:11.468 "data_offset": 0, 00:13:11.468 "data_size": 65536 00:13:11.468 } 00:13:11.468 ] 00:13:11.468 }' 00:13:11.468 18:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.468 18:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 18:44:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 [2024-12-15 18:44:12.021731] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.727 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:11.987 [2024-12-15 18:44:12.293187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:11.987 /dev/nbd0 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.987 1+0 records in 00:13:11.987 1+0 records out 00:13:11.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521254 s, 7.9 MB/s 00:13:11.987 
18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:11.987 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:12.246 512+0 records in 00:13:12.246 512+0 records out 00:13:12.246 67108864 bytes (67 MB, 64 MiB) copied, 0.287614 s, 233 MB/s 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:13:12.246 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.506 [2024-12-15 18:44:12.877982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 [2024-12-15 18:44:12.894067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.506 18:44:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.766 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.766 "name": "raid_bdev1", 00:13:12.766 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:12.766 "strip_size_kb": 64, 00:13:12.766 "state": "online", 00:13:12.766 "raid_level": "raid5f", 00:13:12.766 "superblock": false, 00:13:12.766 "num_base_bdevs": 3, 00:13:12.766 "num_base_bdevs_discovered": 2, 00:13:12.766 "num_base_bdevs_operational": 2, 00:13:12.766 "base_bdevs_list": [ 00:13:12.766 { 00:13:12.766 "name": null, 00:13:12.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.766 "is_configured": false, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 }, 00:13:12.766 { 00:13:12.766 
"name": "BaseBdev2", 00:13:12.766 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:12.766 "is_configured": true, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 }, 00:13:12.766 { 00:13:12.766 "name": "BaseBdev3", 00:13:12.766 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:12.766 "is_configured": true, 00:13:12.766 "data_offset": 0, 00:13:12.766 "data_size": 65536 00:13:12.766 } 00:13:12.766 ] 00:13:12.766 }' 00:13:12.766 18:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.766 18:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.026 18:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.026 18:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.026 18:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.026 [2024-12-15 18:44:13.309341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.026 [2024-12-15 18:44:13.314063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:13.026 18:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.026 18:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:13.026 [2024-12-15 18:44:13.316188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.964 "name": "raid_bdev1", 00:13:13.964 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:13.964 "strip_size_kb": 64, 00:13:13.964 "state": "online", 00:13:13.964 "raid_level": "raid5f", 00:13:13.964 "superblock": false, 00:13:13.964 "num_base_bdevs": 3, 00:13:13.964 "num_base_bdevs_discovered": 3, 00:13:13.964 "num_base_bdevs_operational": 3, 00:13:13.964 "process": { 00:13:13.964 "type": "rebuild", 00:13:13.964 "target": "spare", 00:13:13.964 "progress": { 00:13:13.964 "blocks": 20480, 00:13:13.964 "percent": 15 00:13:13.964 } 00:13:13.964 }, 00:13:13.964 "base_bdevs_list": [ 00:13:13.964 { 00:13:13.964 "name": "spare", 00:13:13.964 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:13.964 "is_configured": true, 00:13:13.964 "data_offset": 0, 00:13:13.964 "data_size": 65536 00:13:13.964 }, 00:13:13.964 { 00:13:13.964 "name": "BaseBdev2", 00:13:13.964 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:13.964 "is_configured": true, 00:13:13.964 "data_offset": 0, 00:13:13.964 "data_size": 65536 00:13:13.964 }, 00:13:13.964 { 00:13:13.964 "name": "BaseBdev3", 00:13:13.964 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:13.964 "is_configured": true, 00:13:13.964 "data_offset": 0, 00:13:13.964 
"data_size": 65536 00:13:13.964 } 00:13:13.964 ] 00:13:13.964 }' 00:13:13.964 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.225 [2024-12-15 18:44:14.476738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.225 [2024-12-15 18:44:14.523466] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.225 [2024-12-15 18:44:14.523525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.225 [2024-12-15 18:44:14.523541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.225 [2024-12-15 18:44:14.523550] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.225 "name": "raid_bdev1", 00:13:14.225 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:14.225 "strip_size_kb": 64, 00:13:14.225 "state": "online", 00:13:14.225 "raid_level": "raid5f", 00:13:14.225 "superblock": false, 00:13:14.225 "num_base_bdevs": 3, 00:13:14.225 "num_base_bdevs_discovered": 2, 00:13:14.225 "num_base_bdevs_operational": 2, 00:13:14.225 "base_bdevs_list": [ 00:13:14.225 { 00:13:14.225 "name": null, 00:13:14.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.225 "is_configured": false, 00:13:14.225 "data_offset": 0, 00:13:14.225 "data_size": 65536 00:13:14.225 }, 00:13:14.225 { 00:13:14.225 "name": "BaseBdev2", 00:13:14.225 
"uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:14.225 "is_configured": true, 00:13:14.225 "data_offset": 0, 00:13:14.225 "data_size": 65536 00:13:14.225 }, 00:13:14.225 { 00:13:14.225 "name": "BaseBdev3", 00:13:14.225 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:14.225 "is_configured": true, 00:13:14.225 "data_offset": 0, 00:13:14.225 "data_size": 65536 00:13:14.225 } 00:13:14.225 ] 00:13:14.225 }' 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.225 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.794 18:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.794 "name": "raid_bdev1", 00:13:14.794 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:14.794 "strip_size_kb": 64, 00:13:14.794 "state": "online", 00:13:14.794 "raid_level": 
"raid5f", 00:13:14.794 "superblock": false, 00:13:14.794 "num_base_bdevs": 3, 00:13:14.794 "num_base_bdevs_discovered": 2, 00:13:14.794 "num_base_bdevs_operational": 2, 00:13:14.794 "base_bdevs_list": [ 00:13:14.794 { 00:13:14.794 "name": null, 00:13:14.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.794 "is_configured": false, 00:13:14.794 "data_offset": 0, 00:13:14.794 "data_size": 65536 00:13:14.794 }, 00:13:14.794 { 00:13:14.794 "name": "BaseBdev2", 00:13:14.794 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:14.794 "is_configured": true, 00:13:14.794 "data_offset": 0, 00:13:14.794 "data_size": 65536 00:13:14.794 }, 00:13:14.794 { 00:13:14.794 "name": "BaseBdev3", 00:13:14.794 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:14.794 "is_configured": true, 00:13:14.794 "data_offset": 0, 00:13:14.794 "data_size": 65536 00:13:14.794 } 00:13:14.794 ] 00:13:14.794 }' 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.794 [2024-12-15 18:44:15.132522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.794 [2024-12-15 18:44:15.136994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.794 18:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:14.794 [2024-12-15 18:44:15.139063] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.733 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.992 "name": "raid_bdev1", 00:13:15.992 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:15.992 "strip_size_kb": 64, 00:13:15.992 "state": "online", 00:13:15.992 "raid_level": "raid5f", 00:13:15.992 "superblock": false, 00:13:15.992 "num_base_bdevs": 3, 00:13:15.992 "num_base_bdevs_discovered": 3, 00:13:15.992 "num_base_bdevs_operational": 3, 00:13:15.992 "process": { 00:13:15.992 "type": "rebuild", 00:13:15.992 "target": "spare", 00:13:15.992 "progress": { 00:13:15.992 "blocks": 20480, 00:13:15.992 
"percent": 15 00:13:15.992 } 00:13:15.992 }, 00:13:15.992 "base_bdevs_list": [ 00:13:15.992 { 00:13:15.992 "name": "spare", 00:13:15.992 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 "data_size": 65536 00:13:15.992 }, 00:13:15.992 { 00:13:15.992 "name": "BaseBdev2", 00:13:15.992 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 "data_size": 65536 00:13:15.992 }, 00:13:15.992 { 00:13:15.992 "name": "BaseBdev3", 00:13:15.992 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 "data_size": 65536 00:13:15.992 } 00:13:15.992 ] 00:13:15.992 }' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.992 "name": "raid_bdev1", 00:13:15.992 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:15.992 "strip_size_kb": 64, 00:13:15.992 "state": "online", 00:13:15.992 "raid_level": "raid5f", 00:13:15.992 "superblock": false, 00:13:15.992 "num_base_bdevs": 3, 00:13:15.992 "num_base_bdevs_discovered": 3, 00:13:15.992 "num_base_bdevs_operational": 3, 00:13:15.992 "process": { 00:13:15.992 "type": "rebuild", 00:13:15.992 "target": "spare", 00:13:15.992 "progress": { 00:13:15.992 "blocks": 22528, 00:13:15.992 "percent": 17 00:13:15.992 } 00:13:15.992 }, 00:13:15.992 "base_bdevs_list": [ 00:13:15.992 { 00:13:15.992 "name": "spare", 00:13:15.992 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 "data_size": 65536 00:13:15.992 }, 00:13:15.992 { 00:13:15.992 "name": "BaseBdev2", 00:13:15.992 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 
"data_size": 65536 00:13:15.992 }, 00:13:15.992 { 00:13:15.992 "name": "BaseBdev3", 00:13:15.992 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:15.992 "is_configured": true, 00:13:15.992 "data_offset": 0, 00:13:15.992 "data_size": 65536 00:13:15.992 } 00:13:15.992 ] 00:13:15.992 }' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.992 18:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.373 "name": "raid_bdev1", 00:13:17.373 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:17.373 "strip_size_kb": 64, 00:13:17.373 "state": "online", 00:13:17.373 "raid_level": "raid5f", 00:13:17.373 "superblock": false, 00:13:17.373 "num_base_bdevs": 3, 00:13:17.373 "num_base_bdevs_discovered": 3, 00:13:17.373 "num_base_bdevs_operational": 3, 00:13:17.373 "process": { 00:13:17.373 "type": "rebuild", 00:13:17.373 "target": "spare", 00:13:17.373 "progress": { 00:13:17.373 "blocks": 45056, 00:13:17.373 "percent": 34 00:13:17.373 } 00:13:17.373 }, 00:13:17.373 "base_bdevs_list": [ 00:13:17.373 { 00:13:17.373 "name": "spare", 00:13:17.373 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:17.373 "is_configured": true, 00:13:17.373 "data_offset": 0, 00:13:17.373 "data_size": 65536 00:13:17.373 }, 00:13:17.373 { 00:13:17.373 "name": "BaseBdev2", 00:13:17.373 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:17.373 "is_configured": true, 00:13:17.373 "data_offset": 0, 00:13:17.373 "data_size": 65536 00:13:17.373 }, 00:13:17.373 { 00:13:17.373 "name": "BaseBdev3", 00:13:17.373 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:17.373 "is_configured": true, 00:13:17.373 "data_offset": 0, 00:13:17.373 "data_size": 65536 00:13:17.373 } 00:13:17.373 ] 00:13:17.373 }' 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.373 18:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.354 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.355 "name": "raid_bdev1", 00:13:18.355 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:18.355 "strip_size_kb": 64, 00:13:18.355 "state": "online", 00:13:18.355 "raid_level": "raid5f", 00:13:18.355 "superblock": false, 00:13:18.355 "num_base_bdevs": 3, 00:13:18.355 "num_base_bdevs_discovered": 3, 00:13:18.355 "num_base_bdevs_operational": 3, 00:13:18.355 "process": { 00:13:18.355 "type": "rebuild", 00:13:18.355 "target": "spare", 00:13:18.355 "progress": { 00:13:18.355 "blocks": 69632, 00:13:18.355 "percent": 53 00:13:18.355 } 00:13:18.355 }, 00:13:18.355 "base_bdevs_list": [ 00:13:18.355 { 00:13:18.355 "name": "spare", 00:13:18.355 "uuid": 
"763e22e3-4376-5e94-b789-a141028707d8", 00:13:18.355 "is_configured": true, 00:13:18.355 "data_offset": 0, 00:13:18.355 "data_size": 65536 00:13:18.355 }, 00:13:18.355 { 00:13:18.355 "name": "BaseBdev2", 00:13:18.355 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:18.355 "is_configured": true, 00:13:18.355 "data_offset": 0, 00:13:18.355 "data_size": 65536 00:13:18.355 }, 00:13:18.355 { 00:13:18.355 "name": "BaseBdev3", 00:13:18.355 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:18.355 "is_configured": true, 00:13:18.355 "data_offset": 0, 00:13:18.355 "data_size": 65536 00:13:18.355 } 00:13:18.355 ] 00:13:18.355 }' 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.355 18:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.294 18:44:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.294 18:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.554 "name": "raid_bdev1", 00:13:19.554 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:19.554 "strip_size_kb": 64, 00:13:19.554 "state": "online", 00:13:19.554 "raid_level": "raid5f", 00:13:19.554 "superblock": false, 00:13:19.554 "num_base_bdevs": 3, 00:13:19.554 "num_base_bdevs_discovered": 3, 00:13:19.554 "num_base_bdevs_operational": 3, 00:13:19.554 "process": { 00:13:19.554 "type": "rebuild", 00:13:19.554 "target": "spare", 00:13:19.554 "progress": { 00:13:19.554 "blocks": 92160, 00:13:19.554 "percent": 70 00:13:19.554 } 00:13:19.554 }, 00:13:19.554 "base_bdevs_list": [ 00:13:19.554 { 00:13:19.554 "name": "spare", 00:13:19.554 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:19.554 "is_configured": true, 00:13:19.554 "data_offset": 0, 00:13:19.554 "data_size": 65536 00:13:19.554 }, 00:13:19.554 { 00:13:19.554 "name": "BaseBdev2", 00:13:19.554 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:19.554 "is_configured": true, 00:13:19.554 "data_offset": 0, 00:13:19.554 "data_size": 65536 00:13:19.554 }, 00:13:19.554 { 00:13:19.554 "name": "BaseBdev3", 00:13:19.554 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:19.554 "is_configured": true, 00:13:19.554 "data_offset": 0, 00:13:19.554 "data_size": 65536 00:13:19.554 } 00:13:19.554 ] 00:13:19.554 }' 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.554 18:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.492 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.492 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.492 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.492 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.492 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.493 "name": "raid_bdev1", 00:13:20.493 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:20.493 "strip_size_kb": 64, 00:13:20.493 "state": "online", 00:13:20.493 "raid_level": "raid5f", 00:13:20.493 "superblock": false, 00:13:20.493 "num_base_bdevs": 3, 00:13:20.493 "num_base_bdevs_discovered": 3, 00:13:20.493 
"num_base_bdevs_operational": 3, 00:13:20.493 "process": { 00:13:20.493 "type": "rebuild", 00:13:20.493 "target": "spare", 00:13:20.493 "progress": { 00:13:20.493 "blocks": 116736, 00:13:20.493 "percent": 89 00:13:20.493 } 00:13:20.493 }, 00:13:20.493 "base_bdevs_list": [ 00:13:20.493 { 00:13:20.493 "name": "spare", 00:13:20.493 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:20.493 "is_configured": true, 00:13:20.493 "data_offset": 0, 00:13:20.493 "data_size": 65536 00:13:20.493 }, 00:13:20.493 { 00:13:20.493 "name": "BaseBdev2", 00:13:20.493 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:20.493 "is_configured": true, 00:13:20.493 "data_offset": 0, 00:13:20.493 "data_size": 65536 00:13:20.493 }, 00:13:20.493 { 00:13:20.493 "name": "BaseBdev3", 00:13:20.493 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:20.493 "is_configured": true, 00:13:20.493 "data_offset": 0, 00:13:20.493 "data_size": 65536 00:13:20.493 } 00:13:20.493 ] 00:13:20.493 }' 00:13:20.493 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.753 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.753 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.753 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.753 18:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.322 [2024-12-15 18:44:21.574657] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.322 [2024-12-15 18:44:21.574787] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.322 [2024-12-15 18:44:21.574886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.581 18:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.581 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.841 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.841 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.841 "name": "raid_bdev1", 00:13:21.841 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:21.841 "strip_size_kb": 64, 00:13:21.841 "state": "online", 00:13:21.841 "raid_level": "raid5f", 00:13:21.841 "superblock": false, 00:13:21.841 "num_base_bdevs": 3, 00:13:21.841 "num_base_bdevs_discovered": 3, 00:13:21.841 "num_base_bdevs_operational": 3, 00:13:21.841 "base_bdevs_list": [ 00:13:21.841 { 00:13:21.841 "name": "spare", 00:13:21.841 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:21.841 "is_configured": true, 00:13:21.841 "data_offset": 0, 00:13:21.841 "data_size": 65536 00:13:21.841 }, 00:13:21.841 { 00:13:21.841 "name": "BaseBdev2", 00:13:21.841 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:21.841 "is_configured": true, 00:13:21.841 
"data_offset": 0, 00:13:21.841 "data_size": 65536 00:13:21.841 }, 00:13:21.841 { 00:13:21.841 "name": "BaseBdev3", 00:13:21.841 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:21.841 "is_configured": true, 00:13:21.841 "data_offset": 0, 00:13:21.841 "data_size": 65536 00:13:21.841 } 00:13:21.841 ] 00:13:21.841 }' 00:13:21.841 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.841 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:21.841 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.842 18:44:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.842 "name": "raid_bdev1", 00:13:21.842 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:21.842 "strip_size_kb": 64, 00:13:21.842 "state": "online", 00:13:21.842 "raid_level": "raid5f", 00:13:21.842 "superblock": false, 00:13:21.842 "num_base_bdevs": 3, 00:13:21.842 "num_base_bdevs_discovered": 3, 00:13:21.842 "num_base_bdevs_operational": 3, 00:13:21.842 "base_bdevs_list": [ 00:13:21.842 { 00:13:21.842 "name": "spare", 00:13:21.842 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:21.842 "is_configured": true, 00:13:21.842 "data_offset": 0, 00:13:21.842 "data_size": 65536 00:13:21.842 }, 00:13:21.842 { 00:13:21.842 "name": "BaseBdev2", 00:13:21.842 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:21.842 "is_configured": true, 00:13:21.842 "data_offset": 0, 00:13:21.842 "data_size": 65536 00:13:21.842 }, 00:13:21.842 { 00:13:21.842 "name": "BaseBdev3", 00:13:21.842 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:21.842 "is_configured": true, 00:13:21.842 "data_offset": 0, 00:13:21.842 "data_size": 65536 00:13:21.842 } 00:13:21.842 ] 00:13:21.842 }' 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.842 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.102 18:44:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.102 "name": "raid_bdev1", 00:13:22.102 "uuid": "851fac42-3cb8-4f9b-84c7-ab494adf93ee", 00:13:22.102 "strip_size_kb": 64, 00:13:22.102 "state": "online", 00:13:22.102 "raid_level": "raid5f", 00:13:22.102 "superblock": false, 00:13:22.102 "num_base_bdevs": 3, 00:13:22.102 "num_base_bdevs_discovered": 3, 00:13:22.102 "num_base_bdevs_operational": 3, 00:13:22.102 "base_bdevs_list": [ 00:13:22.102 { 00:13:22.102 "name": "spare", 00:13:22.102 "uuid": "763e22e3-4376-5e94-b789-a141028707d8", 00:13:22.102 "is_configured": true, 00:13:22.102 "data_offset": 0, 00:13:22.102 "data_size": 65536 00:13:22.102 }, 00:13:22.102 { 00:13:22.102 
"name": "BaseBdev2", 00:13:22.102 "uuid": "24fa1c37-854d-5759-9b50-c08fc2295406", 00:13:22.102 "is_configured": true, 00:13:22.102 "data_offset": 0, 00:13:22.102 "data_size": 65536 00:13:22.102 }, 00:13:22.102 { 00:13:22.102 "name": "BaseBdev3", 00:13:22.102 "uuid": "5f592be3-83d3-5087-a7fb-972d9aa45941", 00:13:22.102 "is_configured": true, 00:13:22.102 "data_offset": 0, 00:13:22.102 "data_size": 65536 00:13:22.102 } 00:13:22.102 ] 00:13:22.102 }' 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.102 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.362 [2024-12-15 18:44:22.762196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.362 [2024-12-15 18:44:22.762269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.362 [2024-12-15 18:44:22.762393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.362 [2024-12-15 18:44:22.762490] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.362 [2024-12-15 18:44:22.762532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.362 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.622 /dev/nbd0 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.622 18:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.622 1+0 records in 00:13:22.622 1+0 records out 00:13:22.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311681 s, 13.1 MB/s 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.622 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:22.882 /dev/nbd1 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.882 1+0 records in 00:13:22.882 1+0 records out 00:13:22.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404201 s, 10.1 MB/s 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.882 18:44:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.882 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.142 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94018 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 94018 ']' 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 94018 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94018 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.402 killing process with pid 94018 00:13:23.402 
Received shutdown signal, test time was about 60.000000 seconds 00:13:23.402 00:13:23.402 Latency(us) 00:13:23.402 [2024-12-15T18:44:23.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.402 [2024-12-15T18:44:23.843Z] =================================================================================================================== 00:13:23.402 [2024-12-15T18:44:23.843Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94018' 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 94018 00:13:23.402 [2024-12-15 18:44:23.769696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.402 18:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 94018 00:13:23.402 [2024-12-15 18:44:23.811267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.662 18:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:23.662 00:13:23.662 real 0m13.485s 00:13:23.662 user 0m16.931s 00:13:23.662 sys 0m1.898s 00:13:23.663 ************************************ 00:13:23.663 END TEST raid5f_rebuild_test 00:13:23.663 ************************************ 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.663 18:44:24 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:23.663 18:44:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:23.663 18:44:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.663 18:44:24 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:13:23.663 ************************************ 00:13:23.663 START TEST raid5f_rebuild_test_sb 00:13:23.663 ************************************ 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:23.663 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94438 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94438 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 94438 ']' 00:13:23.923 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.923 18:44:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.923 [2024-12-15 18:44:24.182429] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:13:23.923 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:23.923 Zero copy mechanism will not be used. 00:13:23.923 [2024-12-15 18:44:24.182605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94438 ] 00:13:23.923 [2024-12-15 18:44:24.352956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.182 [2024-12-15 18:44:24.378298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.182 [2024-12-15 18:44:24.421284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.182 [2024-12-15 18:44:24.421322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.752 
18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.752 BaseBdev1_malloc 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.752 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.752 [2024-12-15 18:44:25.041140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:24.752 [2024-12-15 18:44:25.041247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.752 [2024-12-15 18:44:25.041293] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.752 [2024-12-15 18:44:25.041378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.753 [2024-12-15 18:44:25.043527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.753 [2024-12-15 18:44:25.043615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:24.753 BaseBdev1 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:24.753 18:44:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 BaseBdev2_malloc 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 [2024-12-15 18:44:25.069814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:24.753 [2024-12-15 18:44:25.069920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.753 [2024-12-15 18:44:25.069960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.753 [2024-12-15 18:44:25.069990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.753 [2024-12-15 18:44:25.072013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.753 [2024-12-15 18:44:25.072080] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:24.753 BaseBdev2 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.753 BaseBdev3_malloc 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 [2024-12-15 18:44:25.098390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:24.753 [2024-12-15 18:44:25.098494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.753 [2024-12-15 18:44:25.098536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:24.753 [2024-12-15 18:44:25.098565] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.753 [2024-12-15 18:44:25.100619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.753 [2024-12-15 18:44:25.100709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:24.753 BaseBdev3 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 spare_malloc 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 spare_delay 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 [2024-12-15 18:44:25.156552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.753 [2024-12-15 18:44:25.156673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.753 [2024-12-15 18:44:25.156717] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:24.753 [2024-12-15 18:44:25.156770] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.753 [2024-12-15 18:44:25.159128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.753 [2024-12-15 18:44:25.159222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.753 spare 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.753 [2024-12-15 18:44:25.168582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.753 [2024-12-15 18:44:25.170403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.753 [2024-12-15 18:44:25.170506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.753 [2024-12-15 18:44:25.170684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:24.753 [2024-12-15 18:44:25.170733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:24.753 [2024-12-15 18:44:25.170997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:24.753 [2024-12-15 18:44:25.171429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:24.753 [2024-12-15 18:44:25.171477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:24.753 [2024-12-15 18:44:25.171629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.753 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.013 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.013 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.013 "name": "raid_bdev1", 00:13:25.013 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:25.013 "strip_size_kb": 64, 00:13:25.013 "state": "online", 00:13:25.013 "raid_level": "raid5f", 00:13:25.013 "superblock": true, 00:13:25.013 "num_base_bdevs": 3, 00:13:25.013 "num_base_bdevs_discovered": 3, 00:13:25.013 "num_base_bdevs_operational": 3, 00:13:25.013 "base_bdevs_list": [ 00:13:25.013 { 00:13:25.013 "name": "BaseBdev1", 00:13:25.013 "uuid": "765f3ff5-1a65-5d48-8db4-01f7fe4183c1", 00:13:25.013 "is_configured": true, 00:13:25.013 "data_offset": 2048, 00:13:25.013 "data_size": 63488 00:13:25.013 }, 00:13:25.013 { 00:13:25.013 "name": "BaseBdev2", 00:13:25.013 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:25.013 "is_configured": true, 00:13:25.013 "data_offset": 2048, 00:13:25.013 "data_size": 63488 00:13:25.013 }, 00:13:25.013 { 00:13:25.013 "name": "BaseBdev3", 00:13:25.013 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:25.013 "is_configured": true, 
00:13:25.013 "data_offset": 2048, 00:13:25.013 "data_size": 63488 00:13:25.013 } 00:13:25.013 ] 00:13:25.013 }' 00:13:25.013 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.013 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.273 [2024-12-15 18:44:25.612552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:25.273 18:44:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.273 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:25.533 [2024-12-15 18:44:25.880008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:25.533 /dev/nbd0 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.533 1+0 records in 00:13:25.533 1+0 records out 00:13:25.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411419 s, 10.0 MB/s 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:25.533 18:44:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:26.103 496+0 records in 00:13:26.103 496+0 records out 00:13:26.103 65011712 bytes (65 MB, 62 MiB) copied, 0.28979 s, 224 MB/s 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.103 [2024-12-15 18:44:26.469051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.103 [2024-12-15 18:44:26.485128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.103 18:44:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.103 "name": "raid_bdev1", 00:13:26.103 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:26.103 "strip_size_kb": 64, 00:13:26.103 "state": "online", 00:13:26.103 "raid_level": "raid5f", 00:13:26.103 "superblock": true, 00:13:26.103 "num_base_bdevs": 3, 00:13:26.103 "num_base_bdevs_discovered": 2, 00:13:26.103 "num_base_bdevs_operational": 2, 00:13:26.103 "base_bdevs_list": [ 00:13:26.103 { 00:13:26.103 "name": null, 00:13:26.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.103 "is_configured": false, 00:13:26.103 "data_offset": 0, 00:13:26.103 "data_size": 63488 00:13:26.103 }, 00:13:26.103 { 00:13:26.103 "name": "BaseBdev2", 00:13:26.103 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:26.103 "is_configured": true, 00:13:26.103 "data_offset": 2048, 00:13:26.103 "data_size": 63488 00:13:26.103 }, 00:13:26.103 { 00:13:26.103 "name": "BaseBdev3", 00:13:26.103 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:26.103 "is_configured": true, 00:13:26.103 "data_offset": 2048, 00:13:26.103 "data_size": 63488 00:13:26.103 } 00:13:26.103 ] 00:13:26.103 }' 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.103 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.673 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.673 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.673 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.673 [2024-12-15 18:44:26.940370] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.673 [2024-12-15 18:44:26.944982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:26.673 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.673 18:44:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:26.673 [2024-12-15 18:44:26.947152] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.615 18:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.615 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.615 "name": "raid_bdev1", 00:13:27.616 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:27.616 "strip_size_kb": 64, 00:13:27.616 "state": "online", 00:13:27.616 "raid_level": "raid5f", 00:13:27.616 
"superblock": true, 00:13:27.616 "num_base_bdevs": 3, 00:13:27.616 "num_base_bdevs_discovered": 3, 00:13:27.616 "num_base_bdevs_operational": 3, 00:13:27.616 "process": { 00:13:27.616 "type": "rebuild", 00:13:27.616 "target": "spare", 00:13:27.616 "progress": { 00:13:27.616 "blocks": 20480, 00:13:27.616 "percent": 16 00:13:27.616 } 00:13:27.616 }, 00:13:27.616 "base_bdevs_list": [ 00:13:27.616 { 00:13:27.616 "name": "spare", 00:13:27.616 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:27.616 "is_configured": true, 00:13:27.616 "data_offset": 2048, 00:13:27.616 "data_size": 63488 00:13:27.616 }, 00:13:27.616 { 00:13:27.616 "name": "BaseBdev2", 00:13:27.616 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:27.616 "is_configured": true, 00:13:27.616 "data_offset": 2048, 00:13:27.616 "data_size": 63488 00:13:27.616 }, 00:13:27.616 { 00:13:27.616 "name": "BaseBdev3", 00:13:27.616 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:27.616 "is_configured": true, 00:13:27.616 "data_offset": 2048, 00:13:27.616 "data_size": 63488 00:13:27.616 } 00:13:27.616 ] 00:13:27.616 }' 00:13:27.616 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.875 [2024-12-15 18:44:28.115126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:27.875 [2024-12-15 18:44:28.154667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.875 [2024-12-15 18:44:28.154794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.875 [2024-12-15 18:44:28.154842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.875 [2024-12-15 18:44:28.154883] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.875 "name": "raid_bdev1", 00:13:27.875 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:27.875 "strip_size_kb": 64, 00:13:27.875 "state": "online", 00:13:27.875 "raid_level": "raid5f", 00:13:27.875 "superblock": true, 00:13:27.875 "num_base_bdevs": 3, 00:13:27.875 "num_base_bdevs_discovered": 2, 00:13:27.875 "num_base_bdevs_operational": 2, 00:13:27.875 "base_bdevs_list": [ 00:13:27.875 { 00:13:27.875 "name": null, 00:13:27.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.875 "is_configured": false, 00:13:27.875 "data_offset": 0, 00:13:27.875 "data_size": 63488 00:13:27.875 }, 00:13:27.875 { 00:13:27.875 "name": "BaseBdev2", 00:13:27.875 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:27.875 "is_configured": true, 00:13:27.875 "data_offset": 2048, 00:13:27.875 "data_size": 63488 00:13:27.875 }, 00:13:27.875 { 00:13:27.875 "name": "BaseBdev3", 00:13:27.875 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:27.875 "is_configured": true, 00:13:27.875 "data_offset": 2048, 00:13:27.875 "data_size": 63488 00:13:27.875 } 00:13:27.875 ] 00:13:27.875 }' 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.875 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.445 18:44:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.445 "name": "raid_bdev1", 00:13:28.445 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:28.445 "strip_size_kb": 64, 00:13:28.445 "state": "online", 00:13:28.445 "raid_level": "raid5f", 00:13:28.445 "superblock": true, 00:13:28.445 "num_base_bdevs": 3, 00:13:28.445 "num_base_bdevs_discovered": 2, 00:13:28.445 "num_base_bdevs_operational": 2, 00:13:28.445 "base_bdevs_list": [ 00:13:28.445 { 00:13:28.445 "name": null, 00:13:28.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.445 "is_configured": false, 00:13:28.445 "data_offset": 0, 00:13:28.445 "data_size": 63488 00:13:28.445 }, 00:13:28.445 { 00:13:28.445 "name": "BaseBdev2", 00:13:28.445 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:28.445 "is_configured": true, 00:13:28.445 "data_offset": 2048, 00:13:28.445 "data_size": 63488 00:13:28.445 }, 00:13:28.445 { 00:13:28.445 "name": "BaseBdev3", 00:13:28.445 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:28.445 "is_configured": true, 00:13:28.445 "data_offset": 2048, 00:13:28.445 
"data_size": 63488 00:13:28.445 } 00:13:28.445 ] 00:13:28.445 }' 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.445 [2024-12-15 18:44:28.756078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.445 [2024-12-15 18:44:28.760695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.445 18:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:28.445 [2024-12-15 18:44:28.762786] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.385 "name": "raid_bdev1", 00:13:29.385 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:29.385 "strip_size_kb": 64, 00:13:29.385 "state": "online", 00:13:29.385 "raid_level": "raid5f", 00:13:29.385 "superblock": true, 00:13:29.385 "num_base_bdevs": 3, 00:13:29.385 "num_base_bdevs_discovered": 3, 00:13:29.385 "num_base_bdevs_operational": 3, 00:13:29.385 "process": { 00:13:29.385 "type": "rebuild", 00:13:29.385 "target": "spare", 00:13:29.385 "progress": { 00:13:29.385 "blocks": 20480, 00:13:29.385 "percent": 16 00:13:29.385 } 00:13:29.385 }, 00:13:29.385 "base_bdevs_list": [ 00:13:29.385 { 00:13:29.385 "name": "spare", 00:13:29.385 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:29.385 "is_configured": true, 00:13:29.385 "data_offset": 2048, 00:13:29.385 "data_size": 63488 00:13:29.385 }, 00:13:29.385 { 00:13:29.385 "name": "BaseBdev2", 00:13:29.385 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:29.385 "is_configured": true, 00:13:29.385 "data_offset": 2048, 00:13:29.385 "data_size": 63488 00:13:29.385 }, 00:13:29.385 { 00:13:29.385 "name": "BaseBdev3", 00:13:29.385 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:29.385 "is_configured": true, 00:13:29.385 "data_offset": 2048, 00:13:29.385 "data_size": 63488 00:13:29.385 } 00:13:29.385 ] 00:13:29.385 }' 
00:13:29.385 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:29.645 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=466 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.645 "name": "raid_bdev1", 00:13:29.645 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:29.645 "strip_size_kb": 64, 00:13:29.645 "state": "online", 00:13:29.645 "raid_level": "raid5f", 00:13:29.645 "superblock": true, 00:13:29.645 "num_base_bdevs": 3, 00:13:29.645 "num_base_bdevs_discovered": 3, 00:13:29.645 "num_base_bdevs_operational": 3, 00:13:29.645 "process": { 00:13:29.645 "type": "rebuild", 00:13:29.645 "target": "spare", 00:13:29.645 "progress": { 00:13:29.645 "blocks": 22528, 00:13:29.645 "percent": 17 00:13:29.645 } 00:13:29.645 }, 00:13:29.645 "base_bdevs_list": [ 00:13:29.645 { 00:13:29.645 "name": "spare", 00:13:29.645 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:29.645 "is_configured": true, 00:13:29.645 "data_offset": 2048, 00:13:29.645 "data_size": 63488 00:13:29.645 }, 00:13:29.645 { 00:13:29.645 "name": "BaseBdev2", 00:13:29.645 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:29.645 "is_configured": true, 00:13:29.645 "data_offset": 2048, 00:13:29.645 "data_size": 63488 00:13:29.645 }, 00:13:29.645 { 00:13:29.645 "name": "BaseBdev3", 00:13:29.645 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:29.645 "is_configured": true, 00:13:29.645 "data_offset": 2048, 00:13:29.645 "data_size": 63488 00:13:29.645 } 00:13:29.645 ] 00:13:29.645 }' 00:13:29.645 18:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.645 18:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:29.645 18:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.645 18:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.645 18:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.025 "name": "raid_bdev1", 00:13:31.025 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:31.025 "strip_size_kb": 64, 00:13:31.025 "state": "online", 00:13:31.025 "raid_level": "raid5f", 00:13:31.025 "superblock": true, 00:13:31.025 "num_base_bdevs": 3, 00:13:31.025 "num_base_bdevs_discovered": 3, 00:13:31.025 
"num_base_bdevs_operational": 3, 00:13:31.025 "process": { 00:13:31.025 "type": "rebuild", 00:13:31.025 "target": "spare", 00:13:31.025 "progress": { 00:13:31.025 "blocks": 45056, 00:13:31.025 "percent": 35 00:13:31.025 } 00:13:31.025 }, 00:13:31.025 "base_bdevs_list": [ 00:13:31.025 { 00:13:31.025 "name": "spare", 00:13:31.025 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:31.025 "is_configured": true, 00:13:31.025 "data_offset": 2048, 00:13:31.025 "data_size": 63488 00:13:31.025 }, 00:13:31.025 { 00:13:31.025 "name": "BaseBdev2", 00:13:31.025 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:31.025 "is_configured": true, 00:13:31.025 "data_offset": 2048, 00:13:31.025 "data_size": 63488 00:13:31.025 }, 00:13:31.025 { 00:13:31.025 "name": "BaseBdev3", 00:13:31.025 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:31.025 "is_configured": true, 00:13:31.025 "data_offset": 2048, 00:13:31.025 "data_size": 63488 00:13:31.025 } 00:13:31.025 ] 00:13:31.025 }' 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.025 18:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.965 "name": "raid_bdev1", 00:13:31.965 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:31.965 "strip_size_kb": 64, 00:13:31.965 "state": "online", 00:13:31.965 "raid_level": "raid5f", 00:13:31.965 "superblock": true, 00:13:31.965 "num_base_bdevs": 3, 00:13:31.965 "num_base_bdevs_discovered": 3, 00:13:31.965 "num_base_bdevs_operational": 3, 00:13:31.965 "process": { 00:13:31.965 "type": "rebuild", 00:13:31.965 "target": "spare", 00:13:31.965 "progress": { 00:13:31.965 "blocks": 69632, 00:13:31.965 "percent": 54 00:13:31.965 } 00:13:31.965 }, 00:13:31.965 "base_bdevs_list": [ 00:13:31.965 { 00:13:31.965 "name": "spare", 00:13:31.965 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:31.965 "is_configured": true, 00:13:31.965 "data_offset": 2048, 00:13:31.965 "data_size": 63488 00:13:31.965 }, 00:13:31.965 { 00:13:31.965 "name": "BaseBdev2", 00:13:31.965 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:31.965 "is_configured": true, 00:13:31.965 "data_offset": 2048, 00:13:31.965 "data_size": 63488 00:13:31.965 }, 00:13:31.965 { 00:13:31.965 "name": "BaseBdev3", 
00:13:31.965 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:31.965 "is_configured": true, 00:13:31.965 "data_offset": 2048, 00:13:31.965 "data_size": 63488 00:13:31.965 } 00:13:31.965 ] 00:13:31.965 }' 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.965 18:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.346 "name": "raid_bdev1", 00:13:33.346 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:33.346 "strip_size_kb": 64, 00:13:33.346 "state": "online", 00:13:33.346 "raid_level": "raid5f", 00:13:33.346 "superblock": true, 00:13:33.346 "num_base_bdevs": 3, 00:13:33.346 "num_base_bdevs_discovered": 3, 00:13:33.346 "num_base_bdevs_operational": 3, 00:13:33.346 "process": { 00:13:33.346 "type": "rebuild", 00:13:33.346 "target": "spare", 00:13:33.346 "progress": { 00:13:33.346 "blocks": 94208, 00:13:33.346 "percent": 74 00:13:33.346 } 00:13:33.346 }, 00:13:33.346 "base_bdevs_list": [ 00:13:33.346 { 00:13:33.346 "name": "spare", 00:13:33.346 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:33.346 "is_configured": true, 00:13:33.346 "data_offset": 2048, 00:13:33.346 "data_size": 63488 00:13:33.346 }, 00:13:33.346 { 00:13:33.346 "name": "BaseBdev2", 00:13:33.346 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:33.346 "is_configured": true, 00:13:33.346 "data_offset": 2048, 00:13:33.346 "data_size": 63488 00:13:33.346 }, 00:13:33.346 { 00:13:33.346 "name": "BaseBdev3", 00:13:33.346 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:33.346 "is_configured": true, 00:13:33.346 "data_offset": 2048, 00:13:33.346 "data_size": 63488 00:13:33.346 } 00:13:33.346 ] 00:13:33.346 }' 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.346 18:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.286 18:44:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.286 "name": "raid_bdev1", 00:13:34.286 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:34.286 "strip_size_kb": 64, 00:13:34.286 "state": "online", 00:13:34.286 "raid_level": "raid5f", 00:13:34.286 "superblock": true, 00:13:34.286 "num_base_bdevs": 3, 00:13:34.286 "num_base_bdevs_discovered": 3, 00:13:34.286 "num_base_bdevs_operational": 3, 00:13:34.286 "process": { 00:13:34.286 "type": "rebuild", 00:13:34.286 "target": "spare", 00:13:34.286 "progress": { 00:13:34.286 "blocks": 116736, 00:13:34.286 "percent": 91 00:13:34.286 } 00:13:34.286 }, 00:13:34.286 "base_bdevs_list": [ 00:13:34.286 { 00:13:34.286 "name": "spare", 00:13:34.286 "uuid": 
"ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:34.286 "is_configured": true, 00:13:34.286 "data_offset": 2048, 00:13:34.286 "data_size": 63488 00:13:34.286 }, 00:13:34.286 { 00:13:34.286 "name": "BaseBdev2", 00:13:34.286 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:34.286 "is_configured": true, 00:13:34.286 "data_offset": 2048, 00:13:34.286 "data_size": 63488 00:13:34.286 }, 00:13:34.286 { 00:13:34.286 "name": "BaseBdev3", 00:13:34.286 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:34.286 "is_configured": true, 00:13:34.286 "data_offset": 2048, 00:13:34.286 "data_size": 63488 00:13:34.286 } 00:13:34.286 ] 00:13:34.286 }' 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.286 18:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.855 [2024-12-15 18:44:34.999289] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:34.855 [2024-12-15 18:44:34.999360] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:34.855 [2024-12-15 18:44:34.999474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.423 "name": "raid_bdev1", 00:13:35.423 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:35.423 "strip_size_kb": 64, 00:13:35.423 "state": "online", 00:13:35.423 "raid_level": "raid5f", 00:13:35.423 "superblock": true, 00:13:35.423 "num_base_bdevs": 3, 00:13:35.423 "num_base_bdevs_discovered": 3, 00:13:35.423 "num_base_bdevs_operational": 3, 00:13:35.423 "base_bdevs_list": [ 00:13:35.423 { 00:13:35.423 "name": "spare", 00:13:35.423 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 }, 00:13:35.423 { 00:13:35.423 "name": "BaseBdev2", 00:13:35.423 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 }, 00:13:35.423 { 00:13:35.423 "name": "BaseBdev3", 00:13:35.423 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 } 
00:13:35.423 ] 00:13:35.423 }' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.423 "name": "raid_bdev1", 00:13:35.423 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:35.423 "strip_size_kb": 64, 00:13:35.423 "state": "online", 00:13:35.423 "raid_level": 
"raid5f", 00:13:35.423 "superblock": true, 00:13:35.423 "num_base_bdevs": 3, 00:13:35.423 "num_base_bdevs_discovered": 3, 00:13:35.423 "num_base_bdevs_operational": 3, 00:13:35.423 "base_bdevs_list": [ 00:13:35.423 { 00:13:35.423 "name": "spare", 00:13:35.423 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 }, 00:13:35.423 { 00:13:35.423 "name": "BaseBdev2", 00:13:35.423 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 }, 00:13:35.423 { 00:13:35.423 "name": "BaseBdev3", 00:13:35.423 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:35.423 "is_configured": true, 00:13:35.423 "data_offset": 2048, 00:13:35.423 "data_size": 63488 00:13:35.423 } 00:13:35.423 ] 00:13:35.423 }' 00:13:35.423 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.683 18:44:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.683 "name": "raid_bdev1", 00:13:35.683 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:35.683 "strip_size_kb": 64, 00:13:35.683 "state": "online", 00:13:35.683 "raid_level": "raid5f", 00:13:35.683 "superblock": true, 00:13:35.683 "num_base_bdevs": 3, 00:13:35.683 "num_base_bdevs_discovered": 3, 00:13:35.683 "num_base_bdevs_operational": 3, 00:13:35.683 "base_bdevs_list": [ 00:13:35.683 { 00:13:35.683 "name": "spare", 00:13:35.683 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:35.683 "is_configured": true, 00:13:35.683 "data_offset": 2048, 00:13:35.683 "data_size": 63488 00:13:35.683 }, 00:13:35.683 { 00:13:35.683 "name": "BaseBdev2", 00:13:35.683 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:35.683 "is_configured": true, 00:13:35.683 "data_offset": 2048, 00:13:35.683 
"data_size": 63488 00:13:35.683 }, 00:13:35.683 { 00:13:35.683 "name": "BaseBdev3", 00:13:35.683 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:35.683 "is_configured": true, 00:13:35.683 "data_offset": 2048, 00:13:35.683 "data_size": 63488 00:13:35.683 } 00:13:35.683 ] 00:13:35.683 }' 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.683 18:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.253 [2024-12-15 18:44:36.390742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.253 [2024-12-15 18:44:36.390848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.253 [2024-12-15 18:44:36.390956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.253 [2024-12-15 18:44:36.391075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.253 [2024-12-15 18:44:36.391129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:36.253 /dev/nbd0 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.253 1+0 records in 00:13:36.253 1+0 records out 00:13:36.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604572 s, 6.8 MB/s 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:36.253 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:36.513 /dev/nbd1 00:13:36.513 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:36.513 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.514 1+0 records in 00:13:36.514 1+0 records out 00:13:36.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003221 s, 12.7 MB/s 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:36.514 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.774 18:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.774 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.034 [2024-12-15 18:44:37.440935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.034 [2024-12-15 18:44:37.441036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.034 [2024-12-15 18:44:37.441064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:37.034 [2024-12-15 18:44:37.441073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.034 [2024-12-15 18:44:37.443213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.034 [2024-12-15 18:44:37.443302] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.034 [2024-12-15 18:44:37.443394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:37.034 [2024-12-15 18:44:37.443435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.034 [2024-12-15 18:44:37.443538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.034 [2024-12-15 18:44:37.443635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.034 spare 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.034 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.294 [2024-12-15 18:44:37.543528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:37.294 [2024-12-15 18:44:37.543601] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:37.294 [2024-12-15 18:44:37.543891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:13:37.294 [2024-12-15 18:44:37.544337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:37.294 [2024-12-15 18:44:37.544392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:37.294 [2024-12-15 18:44:37.544574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.294 18:44:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.294 "name": "raid_bdev1", 00:13:37.294 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:37.294 "strip_size_kb": 64, 00:13:37.294 "state": "online", 00:13:37.294 "raid_level": "raid5f", 00:13:37.294 "superblock": true, 00:13:37.294 "num_base_bdevs": 3, 00:13:37.294 "num_base_bdevs_discovered": 3, 00:13:37.294 "num_base_bdevs_operational": 3, 00:13:37.294 "base_bdevs_list": [ 00:13:37.294 { 00:13:37.294 "name": "spare", 00:13:37.294 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:37.294 "is_configured": true, 00:13:37.294 "data_offset": 2048, 00:13:37.294 "data_size": 63488 00:13:37.294 }, 00:13:37.294 { 00:13:37.294 "name": "BaseBdev2", 00:13:37.294 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:37.294 "is_configured": true, 00:13:37.294 "data_offset": 2048, 00:13:37.294 "data_size": 63488 00:13:37.294 }, 00:13:37.294 { 00:13:37.294 "name": "BaseBdev3", 00:13:37.294 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:37.294 "is_configured": true, 00:13:37.294 "data_offset": 2048, 00:13:37.294 "data_size": 63488 00:13:37.294 } 00:13:37.294 ] 00:13:37.294 }' 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.294 18:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.877 18:44:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.877 "name": "raid_bdev1", 00:13:37.877 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:37.877 "strip_size_kb": 64, 00:13:37.877 "state": "online", 00:13:37.877 "raid_level": "raid5f", 00:13:37.877 "superblock": true, 00:13:37.877 "num_base_bdevs": 3, 00:13:37.877 "num_base_bdevs_discovered": 3, 00:13:37.877 "num_base_bdevs_operational": 3, 00:13:37.877 "base_bdevs_list": [ 00:13:37.877 { 00:13:37.877 "name": "spare", 00:13:37.877 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:37.877 "is_configured": true, 00:13:37.877 "data_offset": 2048, 00:13:37.877 "data_size": 63488 00:13:37.877 }, 00:13:37.877 { 00:13:37.877 "name": "BaseBdev2", 00:13:37.877 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:37.877 "is_configured": true, 00:13:37.877 "data_offset": 2048, 00:13:37.877 "data_size": 63488 00:13:37.877 }, 00:13:37.877 { 00:13:37.877 "name": "BaseBdev3", 00:13:37.877 "uuid": 
"4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:37.877 "is_configured": true, 00:13:37.877 "data_offset": 2048, 00:13:37.877 "data_size": 63488 00:13:37.877 } 00:13:37.877 ] 00:13:37.877 }' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 [2024-12-15 18:44:38.192740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:37.877 
18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.877 "name": "raid_bdev1", 00:13:37.877 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:37.877 "strip_size_kb": 64, 00:13:37.877 "state": "online", 00:13:37.877 "raid_level": "raid5f", 00:13:37.877 "superblock": true, 00:13:37.877 "num_base_bdevs": 3, 00:13:37.877 "num_base_bdevs_discovered": 2, 00:13:37.877 "num_base_bdevs_operational": 2, 
00:13:37.877 "base_bdevs_list": [ 00:13:37.877 { 00:13:37.877 "name": null, 00:13:37.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.877 "is_configured": false, 00:13:37.877 "data_offset": 0, 00:13:37.877 "data_size": 63488 00:13:37.877 }, 00:13:37.877 { 00:13:37.877 "name": "BaseBdev2", 00:13:37.877 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:37.877 "is_configured": true, 00:13:37.877 "data_offset": 2048, 00:13:37.877 "data_size": 63488 00:13:37.877 }, 00:13:37.877 { 00:13:37.877 "name": "BaseBdev3", 00:13:37.877 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:37.877 "is_configured": true, 00:13:37.877 "data_offset": 2048, 00:13:37.877 "data_size": 63488 00:13:37.877 } 00:13:37.877 ] 00:13:37.877 }' 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.877 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.463 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.463 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.463 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.463 [2024-12-15 18:44:38.624095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.463 [2024-12-15 18:44:38.624274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:38.463 [2024-12-15 18:44:38.624292] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:38.463 [2024-12-15 18:44:38.624330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.463 [2024-12-15 18:44:38.628679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:13:38.464 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.464 18:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:38.464 [2024-12-15 18:44:38.630798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.403 "name": "raid_bdev1", 00:13:39.403 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:39.403 "strip_size_kb": 64, 00:13:39.403 "state": "online", 00:13:39.403 
"raid_level": "raid5f", 00:13:39.403 "superblock": true, 00:13:39.403 "num_base_bdevs": 3, 00:13:39.403 "num_base_bdevs_discovered": 3, 00:13:39.403 "num_base_bdevs_operational": 3, 00:13:39.403 "process": { 00:13:39.403 "type": "rebuild", 00:13:39.403 "target": "spare", 00:13:39.403 "progress": { 00:13:39.403 "blocks": 20480, 00:13:39.403 "percent": 16 00:13:39.403 } 00:13:39.403 }, 00:13:39.403 "base_bdevs_list": [ 00:13:39.403 { 00:13:39.403 "name": "spare", 00:13:39.403 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:39.403 "is_configured": true, 00:13:39.403 "data_offset": 2048, 00:13:39.403 "data_size": 63488 00:13:39.403 }, 00:13:39.403 { 00:13:39.403 "name": "BaseBdev2", 00:13:39.403 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:39.403 "is_configured": true, 00:13:39.403 "data_offset": 2048, 00:13:39.403 "data_size": 63488 00:13:39.403 }, 00:13:39.403 { 00:13:39.403 "name": "BaseBdev3", 00:13:39.403 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:39.403 "is_configured": true, 00:13:39.403 "data_offset": 2048, 00:13:39.403 "data_size": 63488 00:13:39.403 } 00:13:39.403 ] 00:13:39.403 }' 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.403 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.403 [2024-12-15 18:44:39.790799] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.403 [2024-12-15 18:44:39.837687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.403 [2024-12-15 18:44:39.837819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.403 [2024-12-15 18:44:39.837878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.403 [2024-12-15 18:44:39.837900] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.663 "name": "raid_bdev1", 00:13:39.663 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:39.663 "strip_size_kb": 64, 00:13:39.663 "state": "online", 00:13:39.663 "raid_level": "raid5f", 00:13:39.663 "superblock": true, 00:13:39.663 "num_base_bdevs": 3, 00:13:39.663 "num_base_bdevs_discovered": 2, 00:13:39.663 "num_base_bdevs_operational": 2, 00:13:39.663 "base_bdevs_list": [ 00:13:39.663 { 00:13:39.663 "name": null, 00:13:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.663 "is_configured": false, 00:13:39.663 "data_offset": 0, 00:13:39.663 "data_size": 63488 00:13:39.663 }, 00:13:39.663 { 00:13:39.663 "name": "BaseBdev2", 00:13:39.663 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:39.663 "is_configured": true, 00:13:39.663 "data_offset": 2048, 00:13:39.663 "data_size": 63488 00:13:39.663 }, 00:13:39.663 { 00:13:39.663 "name": "BaseBdev3", 00:13:39.663 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:39.663 "is_configured": true, 00:13:39.663 "data_offset": 2048, 00:13:39.663 "data_size": 63488 00:13:39.663 } 00:13:39.663 ] 00:13:39.663 }' 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.663 18:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.923 18:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:39.923 18:44:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.923 18:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.923 [2024-12-15 18:44:40.338539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:39.923 [2024-12-15 18:44:40.338653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.923 [2024-12-15 18:44:40.338692] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:13:39.923 [2024-12-15 18:44:40.338719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.923 [2024-12-15 18:44:40.339172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.923 [2024-12-15 18:44:40.339233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:39.923 [2024-12-15 18:44:40.339339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:39.923 [2024-12-15 18:44:40.339379] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:39.923 [2024-12-15 18:44:40.339426] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:39.923 [2024-12-15 18:44:40.339483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.923 [2024-12-15 18:44:40.343558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:13:39.923 spare 00:13:39.923 18:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.923 18:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:39.923 [2024-12-15 18:44:40.345661] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.303 "name": "raid_bdev1", 00:13:41.303 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:41.303 "strip_size_kb": 64, 00:13:41.303 "state": 
"online", 00:13:41.303 "raid_level": "raid5f", 00:13:41.303 "superblock": true, 00:13:41.303 "num_base_bdevs": 3, 00:13:41.303 "num_base_bdevs_discovered": 3, 00:13:41.303 "num_base_bdevs_operational": 3, 00:13:41.303 "process": { 00:13:41.303 "type": "rebuild", 00:13:41.303 "target": "spare", 00:13:41.303 "progress": { 00:13:41.303 "blocks": 20480, 00:13:41.303 "percent": 16 00:13:41.303 } 00:13:41.303 }, 00:13:41.303 "base_bdevs_list": [ 00:13:41.303 { 00:13:41.303 "name": "spare", 00:13:41.303 "uuid": "ce171f7d-cec3-500e-a5fb-2185a5f42bb7", 00:13:41.303 "is_configured": true, 00:13:41.303 "data_offset": 2048, 00:13:41.303 "data_size": 63488 00:13:41.303 }, 00:13:41.303 { 00:13:41.303 "name": "BaseBdev2", 00:13:41.303 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:41.303 "is_configured": true, 00:13:41.303 "data_offset": 2048, 00:13:41.303 "data_size": 63488 00:13:41.303 }, 00:13:41.303 { 00:13:41.303 "name": "BaseBdev3", 00:13:41.303 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:41.303 "is_configured": true, 00:13:41.303 "data_offset": 2048, 00:13:41.303 "data_size": 63488 00:13:41.303 } 00:13:41.303 ] 00:13:41.303 }' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 [2024-12-15 18:44:41.501819] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.303 [2024-12-15 18:44:41.552588] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:41.303 [2024-12-15 18:44:41.552725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.303 [2024-12-15 18:44:41.552765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.303 [2024-12-15 18:44:41.552793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.303 "name": "raid_bdev1", 00:13:41.303 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:41.303 "strip_size_kb": 64, 00:13:41.303 "state": "online", 00:13:41.303 "raid_level": "raid5f", 00:13:41.303 "superblock": true, 00:13:41.303 "num_base_bdevs": 3, 00:13:41.303 "num_base_bdevs_discovered": 2, 00:13:41.303 "num_base_bdevs_operational": 2, 00:13:41.303 "base_bdevs_list": [ 00:13:41.303 { 00:13:41.303 "name": null, 00:13:41.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.303 "is_configured": false, 00:13:41.303 "data_offset": 0, 00:13:41.303 "data_size": 63488 00:13:41.303 }, 00:13:41.303 { 00:13:41.303 "name": "BaseBdev2", 00:13:41.303 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:41.303 "is_configured": true, 00:13:41.303 "data_offset": 2048, 00:13:41.303 "data_size": 63488 00:13:41.303 }, 00:13:41.303 { 00:13:41.303 "name": "BaseBdev3", 00:13:41.303 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:41.303 "is_configured": true, 00:13:41.303 "data_offset": 2048, 00:13:41.303 "data_size": 63488 00:13:41.303 } 00:13:41.303 ] 00:13:41.303 }' 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.303 18:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.873 "name": "raid_bdev1", 00:13:41.873 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:41.873 "strip_size_kb": 64, 00:13:41.873 "state": "online", 00:13:41.873 "raid_level": "raid5f", 00:13:41.873 "superblock": true, 00:13:41.873 "num_base_bdevs": 3, 00:13:41.873 "num_base_bdevs_discovered": 2, 00:13:41.873 "num_base_bdevs_operational": 2, 00:13:41.873 "base_bdevs_list": [ 00:13:41.873 { 00:13:41.873 "name": null, 00:13:41.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.873 "is_configured": false, 00:13:41.873 "data_offset": 0, 00:13:41.873 "data_size": 63488 00:13:41.873 }, 00:13:41.873 { 00:13:41.873 "name": "BaseBdev2", 00:13:41.873 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:41.873 "is_configured": true, 00:13:41.873 "data_offset": 2048, 00:13:41.873 "data_size": 63488 00:13:41.873 }, 00:13:41.873 { 00:13:41.873 "name": "BaseBdev3", 00:13:41.873 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:41.873 "is_configured": true, 
00:13:41.873 "data_offset": 2048, 00:13:41.873 "data_size": 63488 00:13:41.873 } 00:13:41.873 ] 00:13:41.873 }' 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.873 [2024-12-15 18:44:42.201170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.873 [2024-12-15 18:44:42.201269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.873 [2024-12-15 18:44:42.201314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:41.873 [2024-12-15 18:44:42.201347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.873 [2024-12-15 18:44:42.201751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.873 [2024-12-15 
18:44:42.201776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.873 [2024-12-15 18:44:42.201859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:41.873 [2024-12-15 18:44:42.201875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:41.873 [2024-12-15 18:44:42.201882] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:41.873 [2024-12-15 18:44:42.201893] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:41.873 BaseBdev1 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.873 18:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.813 18:44:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.813 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.073 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.073 "name": "raid_bdev1", 00:13:43.073 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:43.073 "strip_size_kb": 64, 00:13:43.073 "state": "online", 00:13:43.073 "raid_level": "raid5f", 00:13:43.073 "superblock": true, 00:13:43.073 "num_base_bdevs": 3, 00:13:43.073 "num_base_bdevs_discovered": 2, 00:13:43.073 "num_base_bdevs_operational": 2, 00:13:43.073 "base_bdevs_list": [ 00:13:43.073 { 00:13:43.073 "name": null, 00:13:43.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.073 "is_configured": false, 00:13:43.073 "data_offset": 0, 00:13:43.073 "data_size": 63488 00:13:43.073 }, 00:13:43.073 { 00:13:43.073 "name": "BaseBdev2", 00:13:43.073 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:43.073 "is_configured": true, 00:13:43.073 "data_offset": 2048, 00:13:43.073 "data_size": 63488 00:13:43.073 }, 00:13:43.073 { 00:13:43.073 "name": "BaseBdev3", 00:13:43.073 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:43.073 "is_configured": true, 00:13:43.073 "data_offset": 2048, 00:13:43.073 "data_size": 63488 00:13:43.073 } 00:13:43.073 ] 00:13:43.073 }' 00:13:43.073 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.073 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.333 "name": "raid_bdev1", 00:13:43.333 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:43.333 "strip_size_kb": 64, 00:13:43.333 "state": "online", 00:13:43.333 "raid_level": "raid5f", 00:13:43.333 "superblock": true, 00:13:43.333 "num_base_bdevs": 3, 00:13:43.333 "num_base_bdevs_discovered": 2, 00:13:43.333 "num_base_bdevs_operational": 2, 00:13:43.333 "base_bdevs_list": [ 00:13:43.333 { 00:13:43.333 "name": null, 00:13:43.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.333 "is_configured": false, 00:13:43.333 "data_offset": 0, 00:13:43.333 "data_size": 63488 00:13:43.333 }, 00:13:43.333 { 00:13:43.333 "name": "BaseBdev2", 00:13:43.333 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 
00:13:43.333 "is_configured": true, 00:13:43.333 "data_offset": 2048, 00:13:43.333 "data_size": 63488 00:13:43.333 }, 00:13:43.333 { 00:13:43.333 "name": "BaseBdev3", 00:13:43.333 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:43.333 "is_configured": true, 00:13:43.333 "data_offset": 2048, 00:13:43.333 "data_size": 63488 00:13:43.333 } 00:13:43.333 ] 00:13:43.333 }' 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.333 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:43.592 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.593 18:44:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.593 [2024-12-15 18:44:43.810508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.593 [2024-12-15 18:44:43.810723] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:43.593 [2024-12-15 18:44:43.810785] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:43.593 request: 00:13:43.593 { 00:13:43.593 "base_bdev": "BaseBdev1", 00:13:43.593 "raid_bdev": "raid_bdev1", 00:13:43.593 "method": "bdev_raid_add_base_bdev", 00:13:43.593 "req_id": 1 00:13:43.593 } 00:13:43.593 Got JSON-RPC error response 00:13:43.593 response: 00:13:43.593 { 00:13:43.593 "code": -22, 00:13:43.593 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:43.593 } 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.593 18:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.532 "name": "raid_bdev1", 00:13:44.532 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:44.532 "strip_size_kb": 64, 00:13:44.532 "state": "online", 00:13:44.532 "raid_level": "raid5f", 00:13:44.532 "superblock": true, 00:13:44.532 "num_base_bdevs": 3, 00:13:44.532 "num_base_bdevs_discovered": 2, 00:13:44.532 "num_base_bdevs_operational": 2, 00:13:44.532 "base_bdevs_list": [ 00:13:44.532 { 00:13:44.532 "name": null, 00:13:44.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.532 "is_configured": false, 00:13:44.532 "data_offset": 0, 00:13:44.532 "data_size": 63488 00:13:44.532 }, 00:13:44.532 { 00:13:44.532 
"name": "BaseBdev2", 00:13:44.532 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:44.532 "is_configured": true, 00:13:44.532 "data_offset": 2048, 00:13:44.532 "data_size": 63488 00:13:44.532 }, 00:13:44.532 { 00:13:44.532 "name": "BaseBdev3", 00:13:44.532 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:44.532 "is_configured": true, 00:13:44.532 "data_offset": 2048, 00:13:44.532 "data_size": 63488 00:13:44.532 } 00:13:44.532 ] 00:13:44.532 }' 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.532 18:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.792 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.052 "name": "raid_bdev1", 00:13:45.052 "uuid": "22d7a9b0-cca4-4aea-b1e7-53daaaf564c7", 00:13:45.052 
"strip_size_kb": 64, 00:13:45.052 "state": "online", 00:13:45.052 "raid_level": "raid5f", 00:13:45.052 "superblock": true, 00:13:45.052 "num_base_bdevs": 3, 00:13:45.052 "num_base_bdevs_discovered": 2, 00:13:45.052 "num_base_bdevs_operational": 2, 00:13:45.052 "base_bdevs_list": [ 00:13:45.052 { 00:13:45.052 "name": null, 00:13:45.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.052 "is_configured": false, 00:13:45.052 "data_offset": 0, 00:13:45.052 "data_size": 63488 00:13:45.052 }, 00:13:45.052 { 00:13:45.052 "name": "BaseBdev2", 00:13:45.052 "uuid": "5625fda5-1e46-57cc-80f8-e4b3507d9545", 00:13:45.052 "is_configured": true, 00:13:45.052 "data_offset": 2048, 00:13:45.052 "data_size": 63488 00:13:45.052 }, 00:13:45.052 { 00:13:45.052 "name": "BaseBdev3", 00:13:45.052 "uuid": "4b774b6e-7cba-54fe-9ccd-3b472ce51c2e", 00:13:45.052 "is_configured": true, 00:13:45.052 "data_offset": 2048, 00:13:45.052 "data_size": 63488 00:13:45.052 } 00:13:45.052 ] 00:13:45.052 }' 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94438 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 94438 ']' 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 94438 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.052 18:44:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94438 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.052 killing process with pid 94438 00:13:45.052 Received shutdown signal, test time was about 60.000000 seconds 00:13:45.052 00:13:45.052 Latency(us) 00:13:45.052 [2024-12-15T18:44:45.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.052 [2024-12-15T18:44:45.493Z] =================================================================================================================== 00:13:45.052 [2024-12-15T18:44:45.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94438' 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 94438 00:13:45.052 [2024-12-15 18:44:45.413334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.052 [2024-12-15 18:44:45.413451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.052 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 94438 00:13:45.052 [2024-12-15 18:44:45.413523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.052 [2024-12-15 18:44:45.413532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:45.052 [2024-12-15 18:44:45.453972] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.312 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.312 00:13:45.312 real 0m21.565s 00:13:45.312 user 0m28.129s 
00:13:45.312 sys 0m2.744s 00:13:45.312 ************************************ 00:13:45.312 END TEST raid5f_rebuild_test_sb 00:13:45.312 ************************************ 00:13:45.312 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.312 18:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.312 18:44:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:45.312 18:44:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:45.312 18:44:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:45.312 18:44:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.312 18:44:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.312 ************************************ 00:13:45.312 START TEST raid5f_state_function_test 00:13:45.312 ************************************ 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:45.312 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=95173 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95173' 00:13:45.313 Process raid pid: 95173 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 95173 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 95173 ']' 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.313 18:44:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.572 [2024-12-15 18:44:45.824777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:13:45.572 [2024-12-15 18:44:45.824971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.572 [2024-12-15 18:44:45.995559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.832 [2024-12-15 18:44:46.021746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.832 [2024-12-15 18:44:46.064262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.832 [2024-12-15 18:44:46.064373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.402 [2024-12-15 18:44:46.650969] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.402 [2024-12-15 18:44:46.651026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.402 [2024-12-15 18:44:46.651040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.402 [2024-12-15 18:44:46.651050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.402 [2024-12-15 18:44:46.651056] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:46.402 [2024-12-15 18:44:46.651066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.402 [2024-12-15 18:44:46.651072] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:46.402 [2024-12-15 18:44:46.651080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.402 18:44:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.402 "name": "Existed_Raid", 00:13:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.402 "strip_size_kb": 64, 00:13:46.402 "state": "configuring", 00:13:46.402 "raid_level": "raid5f", 00:13:46.402 "superblock": false, 00:13:46.402 "num_base_bdevs": 4, 00:13:46.402 "num_base_bdevs_discovered": 0, 00:13:46.402 "num_base_bdevs_operational": 4, 00:13:46.402 "base_bdevs_list": [ 00:13:46.402 { 00:13:46.402 "name": "BaseBdev1", 00:13:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.402 "is_configured": false, 00:13:46.402 "data_offset": 0, 00:13:46.402 "data_size": 0 00:13:46.402 }, 00:13:46.402 { 00:13:46.402 "name": "BaseBdev2", 00:13:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.402 "is_configured": false, 00:13:46.402 "data_offset": 0, 00:13:46.402 "data_size": 0 00:13:46.402 }, 00:13:46.402 { 00:13:46.402 "name": "BaseBdev3", 00:13:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.402 "is_configured": false, 00:13:46.402 "data_offset": 0, 00:13:46.402 "data_size": 0 00:13:46.402 }, 00:13:46.402 { 00:13:46.402 "name": "BaseBdev4", 00:13:46.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.402 "is_configured": false, 00:13:46.402 "data_offset": 0, 00:13:46.402 "data_size": 0 00:13:46.402 } 00:13:46.402 ] 00:13:46.402 }' 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.402 18:44:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.661 18:44:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:46.661 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.661 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.661 [2024-12-15 18:44:47.094123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.662 [2024-12-15 18:44:47.094223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:46.662 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.662 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:46.662 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.662 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 [2024-12-15 18:44:47.102104] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.922 [2024-12-15 18:44:47.102191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.922 [2024-12-15 18:44:47.102217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.922 [2024-12-15 18:44:47.102256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.922 [2024-12-15 18:44:47.102275] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.922 [2024-12-15 18:44:47.102296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.922 [2024-12-15 18:44:47.102314] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:46.922 [2024-12-15 18:44:47.102351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 [2024-12-15 18:44:47.119093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.922 BaseBdev1 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.922 
18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 [ 00:13:46.922 { 00:13:46.922 "name": "BaseBdev1", 00:13:46.922 "aliases": [ 00:13:46.922 "11a9479e-f1ff-41ed-b247-7eb722806420" 00:13:46.922 ], 00:13:46.922 "product_name": "Malloc disk", 00:13:46.922 "block_size": 512, 00:13:46.922 "num_blocks": 65536, 00:13:46.922 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420", 00:13:46.922 "assigned_rate_limits": { 00:13:46.922 "rw_ios_per_sec": 0, 00:13:46.922 "rw_mbytes_per_sec": 0, 00:13:46.922 "r_mbytes_per_sec": 0, 00:13:46.922 "w_mbytes_per_sec": 0 00:13:46.922 }, 00:13:46.922 "claimed": true, 00:13:46.922 "claim_type": "exclusive_write", 00:13:46.922 "zoned": false, 00:13:46.922 "supported_io_types": { 00:13:46.922 "read": true, 00:13:46.922 "write": true, 00:13:46.922 "unmap": true, 00:13:46.922 "flush": true, 00:13:46.922 "reset": true, 00:13:46.922 "nvme_admin": false, 00:13:46.922 "nvme_io": false, 00:13:46.922 "nvme_io_md": false, 00:13:46.922 "write_zeroes": true, 00:13:46.922 "zcopy": true, 00:13:46.922 "get_zone_info": false, 00:13:46.922 "zone_management": false, 00:13:46.922 "zone_append": false, 00:13:46.922 "compare": false, 00:13:46.922 "compare_and_write": false, 00:13:46.922 "abort": true, 00:13:46.922 "seek_hole": false, 00:13:46.922 "seek_data": false, 00:13:46.922 "copy": true, 00:13:46.922 "nvme_iov_md": false 00:13:46.922 }, 00:13:46.922 "memory_domains": [ 00:13:46.922 { 00:13:46.922 "dma_device_id": "system", 00:13:46.922 "dma_device_type": 1 00:13:46.922 }, 00:13:46.922 { 00:13:46.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.922 "dma_device_type": 2 00:13:46.922 } 00:13:46.922 ], 00:13:46.922 "driver_specific": {} 00:13:46.922 } 
00:13:46.922 ] 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.922 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.923 "name": "Existed_Raid", 00:13:46.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.923 "strip_size_kb": 64, 00:13:46.923 "state": "configuring", 00:13:46.923 "raid_level": "raid5f", 00:13:46.923 "superblock": false, 00:13:46.923 "num_base_bdevs": 4, 00:13:46.923 "num_base_bdevs_discovered": 1, 00:13:46.923 "num_base_bdevs_operational": 4, 00:13:46.923 "base_bdevs_list": [ 00:13:46.923 { 00:13:46.923 "name": "BaseBdev1", 00:13:46.923 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420", 00:13:46.923 "is_configured": true, 00:13:46.923 "data_offset": 0, 00:13:46.923 "data_size": 65536 00:13:46.923 }, 00:13:46.923 { 00:13:46.923 "name": "BaseBdev2", 00:13:46.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.923 "is_configured": false, 00:13:46.923 "data_offset": 0, 00:13:46.923 "data_size": 0 00:13:46.923 }, 00:13:46.923 { 00:13:46.923 "name": "BaseBdev3", 00:13:46.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.923 "is_configured": false, 00:13:46.923 "data_offset": 0, 00:13:46.923 "data_size": 0 00:13:46.923 }, 00:13:46.923 { 00:13:46.923 "name": "BaseBdev4", 00:13:46.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.923 "is_configured": false, 00:13:46.923 "data_offset": 0, 00:13:46.923 "data_size": 0 00:13:46.923 } 00:13:46.923 ] 00:13:46.923 }' 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.923 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.182 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:47.182 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.182 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.182 
[2024-12-15 18:44:47.614406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:47.182 [2024-12-15 18:44:47.614510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:47.183 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.183 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.183 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.183 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.442 [2024-12-15 18:44:47.626441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.442 [2024-12-15 18:44:47.628301] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.442 [2024-12-15 18:44:47.628374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.442 [2024-12-15 18:44:47.628417] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:47.442 [2024-12-15 18:44:47.628439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.442 [2024-12-15 18:44:47.628457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.442 [2024-12-15 18:44:47.628477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs ))
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.442 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:47.442 "name": "Existed_Raid",
00:13:47.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.442 "strip_size_kb": 64,
00:13:47.442 "state": "configuring",
00:13:47.442 "raid_level": "raid5f",
00:13:47.442 "superblock": false,
00:13:47.442 "num_base_bdevs": 4,
00:13:47.442 "num_base_bdevs_discovered": 1,
00:13:47.442 "num_base_bdevs_operational": 4,
00:13:47.442 "base_bdevs_list": [
00:13:47.442 {
00:13:47.442 "name": "BaseBdev1",
00:13:47.442 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420",
00:13:47.442 "is_configured": true,
00:13:47.442 "data_offset": 0,
00:13:47.442 "data_size": 65536
00:13:47.442 },
00:13:47.442 {
00:13:47.442 "name": "BaseBdev2",
00:13:47.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.442 "is_configured": false,
00:13:47.442 "data_offset": 0,
00:13:47.442 "data_size": 0
00:13:47.442 },
00:13:47.442 {
00:13:47.442 "name": "BaseBdev3",
00:13:47.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.442 "is_configured": false,
00:13:47.442 "data_offset": 0,
00:13:47.442 "data_size": 0
00:13:47.442 },
00:13:47.442 {
00:13:47.442 "name": "BaseBdev4",
00:13:47.442 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.443 "is_configured": false,
00:13:47.443 "data_offset": 0,
00:13:47.443 "data_size": 0
00:13:47.443 }
00:13:47.443 ]
00:13:47.443 }'
00:13:47.443 18:44:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:47.443 18:44:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.702 [2024-12-15 18:44:48.048575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:47.702 BaseBdev2
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.702 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.702 [
00:13:47.702 {
00:13:47.702 "name": "BaseBdev2",
00:13:47.702 "aliases": [
00:13:47.702 "4bb85bd0-709d-4901-bb07-474ccf8bcf02"
00:13:47.702 ],
00:13:47.702 "product_name": "Malloc disk",
00:13:47.702 "block_size": 512,
00:13:47.702 "num_blocks": 65536,
00:13:47.702 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:47.702 "assigned_rate_limits": {
00:13:47.702 "rw_ios_per_sec": 0,
00:13:47.702 "rw_mbytes_per_sec": 0,
00:13:47.702 "r_mbytes_per_sec": 0,
00:13:47.702 "w_mbytes_per_sec": 0
00:13:47.703 },
00:13:47.703 "claimed": true,
00:13:47.703 "claim_type": "exclusive_write",
00:13:47.703 "zoned": false,
00:13:47.703 "supported_io_types": {
00:13:47.703 "read": true,
00:13:47.703 "write": true,
00:13:47.703 "unmap": true,
00:13:47.703 "flush": true,
00:13:47.703 "reset": true,
00:13:47.703 "nvme_admin": false,
00:13:47.703 "nvme_io": false,
00:13:47.703 "nvme_io_md": false,
00:13:47.703 "write_zeroes": true,
00:13:47.703 "zcopy": true,
00:13:47.703 "get_zone_info": false,
00:13:47.703 "zone_management": false,
00:13:47.703 "zone_append": false,
00:13:47.703 "compare": false,
00:13:47.703 "compare_and_write": false,
00:13:47.703 "abort": true,
00:13:47.703 "seek_hole": false,
00:13:47.703 "seek_data": false,
00:13:47.703 "copy": true,
00:13:47.703 "nvme_iov_md": false
00:13:47.703 },
00:13:47.703 "memory_domains": [
00:13:47.703 {
00:13:47.703 "dma_device_id": "system",
00:13:47.703 "dma_device_type": 1
00:13:47.703 },
00:13:47.703 {
00:13:47.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:47.703 "dma_device_type": 2
00:13:47.703 }
00:13:47.703 ],
00:13:47.703 "driver_specific": {}
00:13:47.703 }
00:13:47.703 ]
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:47.703 "name": "Existed_Raid",
00:13:47.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.703 "strip_size_kb": 64,
00:13:47.703 "state": "configuring",
00:13:47.703 "raid_level": "raid5f",
00:13:47.703 "superblock": false,
00:13:47.703 "num_base_bdevs": 4,
00:13:47.703 "num_base_bdevs_discovered": 2,
00:13:47.703 "num_base_bdevs_operational": 4,
00:13:47.703 "base_bdevs_list": [
00:13:47.703 {
00:13:47.703 "name": "BaseBdev1",
00:13:47.703 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420",
00:13:47.703 "is_configured": true,
00:13:47.703 "data_offset": 0,
00:13:47.703 "data_size": 65536
00:13:47.703 },
00:13:47.703 {
00:13:47.703 "name": "BaseBdev2",
00:13:47.703 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:47.703 "is_configured": true,
00:13:47.703 "data_offset": 0,
00:13:47.703 "data_size": 65536
00:13:47.703 },
00:13:47.703 {
00:13:47.703 "name": "BaseBdev3",
00:13:47.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.703 "is_configured": false,
00:13:47.703 "data_offset": 0,
00:13:47.703 "data_size": 0
00:13:47.703 },
00:13:47.703 {
00:13:47.703 "name": "BaseBdev4",
00:13:47.703 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:47.703 "is_configured": false,
00:13:47.703 "data_offset": 0,
00:13:47.703 "data_size": 0
00:13:47.703 }
00:13:47.703 ]
00:13:47.703 }'
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:47.703 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 [2024-12-15 18:44:48.498765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:48.273 BaseBdev3
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 [
00:13:48.273 {
00:13:48.273 "name": "BaseBdev3",
00:13:48.273 "aliases": [
00:13:48.273 "1eb3b419-74d7-4557-bdf5-3d96e0d94051"
00:13:48.273 ],
00:13:48.273 "product_name": "Malloc disk",
00:13:48.273 "block_size": 512,
00:13:48.273 "num_blocks": 65536,
00:13:48.273 "uuid": "1eb3b419-74d7-4557-bdf5-3d96e0d94051",
00:13:48.273 "assigned_rate_limits": {
00:13:48.273 "rw_ios_per_sec": 0,
00:13:48.273 "rw_mbytes_per_sec": 0,
00:13:48.273 "r_mbytes_per_sec": 0,
00:13:48.273 "w_mbytes_per_sec": 0
00:13:48.273 },
00:13:48.273 "claimed": true,
00:13:48.273 "claim_type": "exclusive_write",
00:13:48.273 "zoned": false,
00:13:48.273 "supported_io_types": {
00:13:48.273 "read": true,
00:13:48.273 "write": true,
00:13:48.273 "unmap": true,
00:13:48.273 "flush": true,
00:13:48.273 "reset": true,
00:13:48.273 "nvme_admin": false,
00:13:48.273 "nvme_io": false,
00:13:48.273 "nvme_io_md": false,
00:13:48.273 "write_zeroes": true,
00:13:48.273 "zcopy": true,
00:13:48.273 "get_zone_info": false,
00:13:48.273 "zone_management": false,
00:13:48.273 "zone_append": false,
00:13:48.273 "compare": false,
00:13:48.273 "compare_and_write": false,
00:13:48.273 "abort": true,
00:13:48.273 "seek_hole": false,
00:13:48.273 "seek_data": false,
00:13:48.273 "copy": true,
00:13:48.273 "nvme_iov_md": false
00:13:48.273 },
00:13:48.273 "memory_domains": [
00:13:48.273 {
00:13:48.273 "dma_device_id": "system",
00:13:48.273 "dma_device_type": 1
00:13:48.273 },
00:13:48.273 {
00:13:48.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:48.273 "dma_device_type": 2
00:13:48.273 }
00:13:48.273 ],
00:13:48.273 "driver_specific": {}
00:13:48.273 }
00:13:48.273 ]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.273 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:48.273 "name": "Existed_Raid",
00:13:48.273 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.273 "strip_size_kb": 64,
00:13:48.273 "state": "configuring",
00:13:48.273 "raid_level": "raid5f",
00:13:48.273 "superblock": false,
00:13:48.273 "num_base_bdevs": 4,
00:13:48.273 "num_base_bdevs_discovered": 3,
00:13:48.273 "num_base_bdevs_operational": 4,
00:13:48.273 "base_bdevs_list": [
00:13:48.273 {
00:13:48.273 "name": "BaseBdev1",
00:13:48.273 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420",
00:13:48.273 "is_configured": true,
00:13:48.273 "data_offset": 0,
00:13:48.273 "data_size": 65536
00:13:48.273 },
00:13:48.273 {
00:13:48.273 "name": "BaseBdev2",
00:13:48.273 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:48.273 "is_configured": true,
00:13:48.273 "data_offset": 0,
00:13:48.273 "data_size": 65536
00:13:48.273 },
00:13:48.273 {
00:13:48.273 "name": "BaseBdev3",
00:13:48.274 "uuid": "1eb3b419-74d7-4557-bdf5-3d96e0d94051",
00:13:48.274 "is_configured": true,
00:13:48.274 "data_offset": 0,
00:13:48.274 "data_size": 65536
00:13:48.274 },
00:13:48.274 {
00:13:48.274 "name": "BaseBdev4",
00:13:48.274 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:48.274 "is_configured": false,
00:13:48.274 "data_offset": 0,
00:13:48.274 "data_size": 0
00:13:48.274 }
00:13:48.274 ]
00:13:48.274 }'
00:13:48.274 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:48.274 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.534 [2024-12-15 18:44:48.916988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:48.534 [2024-12-15 18:44:48.917117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:13:48.534 [2024-12-15 18:44:48.917152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:13:48.534 [2024-12-15 18:44:48.917458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:13:48.534 [2024-12-15 18:44:48.918000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:13:48.534 [2024-12-15 18:44:48.918056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:13:48.534 [2024-12-15 18:44:48.918297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:48.534 BaseBdev4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.534 [
00:13:48.534 {
00:13:48.534 "name": "BaseBdev4",
00:13:48.534 "aliases": [
00:13:48.534 "6c11450b-c0c0-4a8d-8dbd-0cacef3d0b01"
00:13:48.534 ],
00:13:48.534 "product_name": "Malloc disk",
00:13:48.534 "block_size": 512,
00:13:48.534 "num_blocks": 65536,
00:13:48.534 "uuid": "6c11450b-c0c0-4a8d-8dbd-0cacef3d0b01",
00:13:48.534 "assigned_rate_limits": {
00:13:48.534 "rw_ios_per_sec": 0,
00:13:48.534 "rw_mbytes_per_sec": 0,
00:13:48.534 "r_mbytes_per_sec": 0,
00:13:48.534 "w_mbytes_per_sec": 0
00:13:48.534 },
00:13:48.534 "claimed": true,
00:13:48.534 "claim_type": "exclusive_write",
00:13:48.534 "zoned": false,
00:13:48.534 "supported_io_types": {
00:13:48.534 "read": true,
00:13:48.534 "write": true,
00:13:48.534 "unmap": true,
00:13:48.534 "flush": true,
00:13:48.534 "reset": true,
00:13:48.534 "nvme_admin": false,
00:13:48.534 "nvme_io": false,
00:13:48.534 "nvme_io_md": false,
00:13:48.534 "write_zeroes": true,
00:13:48.534 "zcopy": true,
00:13:48.534 "get_zone_info": false,
00:13:48.534 "zone_management": false,
00:13:48.534 "zone_append": false,
00:13:48.534 "compare": false,
00:13:48.534 "compare_and_write": false,
00:13:48.534 "abort": true,
00:13:48.534 "seek_hole": false,
00:13:48.534 "seek_data": false,
00:13:48.534 "copy": true,
00:13:48.534 "nvme_iov_md": false
00:13:48.534 },
00:13:48.534 "memory_domains": [
00:13:48.534 {
00:13:48.534 "dma_device_id": "system",
00:13:48.534 "dma_device_type": 1
00:13:48.534 },
00:13:48.534 {
00:13:48.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:48.534 "dma_device_type": 2
00:13:48.534 }
00:13:48.534 ],
00:13:48.534 "driver_specific": {}
00:13:48.534 }
00:13:48.534 ]
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.534 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:48.794 18:44:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.794 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:48.794 "name": "Existed_Raid",
00:13:48.794 "uuid": "cee7426e-8d90-429b-b775-29bc59c3cb41",
00:13:48.794 "strip_size_kb": 64,
00:13:48.794 "state": "online",
00:13:48.794 "raid_level": "raid5f",
00:13:48.794 "superblock": false,
00:13:48.794 "num_base_bdevs": 4,
00:13:48.794 "num_base_bdevs_discovered": 4,
00:13:48.794 "num_base_bdevs_operational": 4,
00:13:48.794 "base_bdevs_list": [
00:13:48.794 {
00:13:48.794 "name": "BaseBdev1",
00:13:48.794 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420",
00:13:48.794 "is_configured": true,
00:13:48.794 "data_offset": 0,
00:13:48.794 "data_size": 65536
00:13:48.794 },
00:13:48.794 {
00:13:48.794 "name": "BaseBdev2",
00:13:48.794 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:48.794 "is_configured": true,
00:13:48.794 "data_offset": 0,
00:13:48.794 "data_size": 65536
00:13:48.794 },
00:13:48.794 {
00:13:48.794 "name": "BaseBdev3",
00:13:48.794 "uuid": "1eb3b419-74d7-4557-bdf5-3d96e0d94051",
00:13:48.794 "is_configured": true,
00:13:48.794 "data_offset": 0,
00:13:48.794 "data_size": 65536
00:13:48.794 },
00:13:48.794 {
00:13:48.794 "name": "BaseBdev4",
00:13:48.794 "uuid": "6c11450b-c0c0-4a8d-8dbd-0cacef3d0b01",
00:13:48.794 "is_configured": true,
00:13:48.794 "data_offset": 0,
00:13:48.794 "data_size": 65536
00:13:48.794 }
00:13:48.794 ]
00:13:48.794 }'
00:13:48.794 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:48.794 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:49.054 [2024-12-15 18:44:49.392711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:49.054 "name": "Existed_Raid",
00:13:49.054 "aliases": [
00:13:49.054 "cee7426e-8d90-429b-b775-29bc59c3cb41"
00:13:49.054 ],
00:13:49.054 "product_name": "Raid Volume",
00:13:49.054 "block_size": 512,
00:13:49.054 "num_blocks": 196608,
00:13:49.054 "uuid": "cee7426e-8d90-429b-b775-29bc59c3cb41",
00:13:49.054 "assigned_rate_limits": {
00:13:49.054 "rw_ios_per_sec": 0,
00:13:49.054 "rw_mbytes_per_sec": 0,
00:13:49.054 "r_mbytes_per_sec": 0,
00:13:49.054 "w_mbytes_per_sec": 0
00:13:49.054 },
00:13:49.054 "claimed": false,
00:13:49.054 "zoned": false,
00:13:49.054 "supported_io_types": {
00:13:49.054 "read": true,
00:13:49.054 "write": true,
00:13:49.054 "unmap": false,
00:13:49.054 "flush": false,
00:13:49.054 "reset": true,
00:13:49.054 "nvme_admin": false,
00:13:49.054 "nvme_io": false,
00:13:49.054 "nvme_io_md": false,
00:13:49.054 "write_zeroes": true,
00:13:49.054 "zcopy": false,
00:13:49.054 "get_zone_info": false,
00:13:49.054 "zone_management": false,
00:13:49.054 "zone_append": false,
00:13:49.054 "compare": false,
00:13:49.054 "compare_and_write": false,
00:13:49.054 "abort": false,
00:13:49.054 "seek_hole": false,
00:13:49.054 "seek_data": false,
00:13:49.054 "copy": false,
00:13:49.054 "nvme_iov_md": false
00:13:49.054 },
00:13:49.054 "driver_specific": {
00:13:49.054 "raid": {
00:13:49.054 "uuid": "cee7426e-8d90-429b-b775-29bc59c3cb41",
00:13:49.054 "strip_size_kb": 64,
00:13:49.054 "state": "online",
00:13:49.054 "raid_level": "raid5f",
00:13:49.054 "superblock": false,
00:13:49.054 "num_base_bdevs": 4,
00:13:49.054 "num_base_bdevs_discovered": 4,
00:13:49.054 "num_base_bdevs_operational": 4,
00:13:49.054 "base_bdevs_list": [
00:13:49.054 {
00:13:49.054 "name": "BaseBdev1",
00:13:49.054 "uuid": "11a9479e-f1ff-41ed-b247-7eb722806420",
00:13:49.054 "is_configured": true,
00:13:49.054 "data_offset": 0,
00:13:49.054 "data_size": 65536
00:13:49.054 },
00:13:49.054 {
00:13:49.054 "name": "BaseBdev2",
00:13:49.054 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:49.054 "is_configured": true,
00:13:49.054 "data_offset": 0,
00:13:49.054 "data_size": 65536
00:13:49.054 },
00:13:49.054 {
00:13:49.054 "name": "BaseBdev3",
00:13:49.054 "uuid": "1eb3b419-74d7-4557-bdf5-3d96e0d94051",
00:13:49.054 "is_configured": true,
00:13:49.054 "data_offset": 0,
00:13:49.054 "data_size": 65536
00:13:49.054 },
00:13:49.054 {
00:13:49.054 "name": "BaseBdev4",
00:13:49.054 "uuid": "6c11450b-c0c0-4a8d-8dbd-0cacef3d0b01",
00:13:49.054 "is_configured": true,
00:13:49.054 "data_offset": 0,
00:13:49.054 "data_size": 65536
00:13:49.054 }
00:13:49.054 ]
00:13:49.054 }
00:13:49.054 }
00:13:49.054 }'
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:13:49.054 BaseBdev2
00:13:49.054 BaseBdev3
00:13:49.054 BaseBdev4'
00:13:49.054 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 [2024-12-15 18:44:49.695998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.315 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.575 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:49.575 "name": "Existed_Raid",
00:13:49.575 "uuid": "cee7426e-8d90-429b-b775-29bc59c3cb41",
00:13:49.575 "strip_size_kb": 64,
00:13:49.575 "state": "online",
00:13:49.575 "raid_level": "raid5f",
00:13:49.575 "superblock": false,
00:13:49.575 "num_base_bdevs": 4,
00:13:49.575 "num_base_bdevs_discovered": 3,
00:13:49.575 "num_base_bdevs_operational": 3,
00:13:49.575 "base_bdevs_list": [
00:13:49.575 {
00:13:49.575 "name": null,
00:13:49.575 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:49.575 "is_configured": false,
00:13:49.575 "data_offset": 0,
00:13:49.575 "data_size": 65536
00:13:49.575 },
00:13:49.575 {
00:13:49.575 "name": "BaseBdev2",
00:13:49.575 "uuid": "4bb85bd0-709d-4901-bb07-474ccf8bcf02",
00:13:49.575 "is_configured": true,
00:13:49.575 "data_offset": 0,
00:13:49.575 "data_size": 65536
00:13:49.575 },
00:13:49.575 {
00:13:49.575 "name": "BaseBdev3",
00:13:49.575 "uuid": "1eb3b419-74d7-4557-bdf5-3d96e0d94051",
00:13:49.575 "is_configured": true,
00:13:49.575 "data_offset": 0,
00:13:49.575 "data_size": 65536
00:13:49.575 },
00:13:49.575 {
00:13:49.575 "name": "BaseBdev4",
00:13:49.575 "uuid": "6c11450b-c0c0-4a8d-8dbd-0cacef3d0b01",
00:13:49.575 "is_configured": true,
00:13:49.575 "data_offset": 0,
00:13:49.575 "data_size": 65536
00:13:49.575 }
00:13:49.575 ]
00:13:49.575 }'
00:13:49.575 18:44:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:49.575 18:44:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:49.835 [2024-12-15 18:44:50.186444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:13:49.835 [2024-12-15 18:44:50.186539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:49.835 [2024-12-15 18:44:50.197658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.835 [2024-12-15 18:44:50.257571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:49.835 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.095 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 [2024-12-15 18:44:50.328736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:50.096 [2024-12-15 18:44:50.328866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 BaseBdev2 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 [ 00:13:50.096 { 00:13:50.096 "name": "BaseBdev2", 00:13:50.096 "aliases": [ 00:13:50.096 "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba" 00:13:50.096 ], 00:13:50.096 "product_name": "Malloc disk", 00:13:50.096 "block_size": 512, 00:13:50.096 "num_blocks": 65536, 00:13:50.096 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:50.096 "assigned_rate_limits": { 00:13:50.096 "rw_ios_per_sec": 0, 00:13:50.096 "rw_mbytes_per_sec": 0, 00:13:50.096 "r_mbytes_per_sec": 0, 00:13:50.096 "w_mbytes_per_sec": 0 00:13:50.096 }, 00:13:50.096 "claimed": false, 00:13:50.096 "zoned": false, 00:13:50.096 "supported_io_types": { 00:13:50.096 "read": true, 00:13:50.096 "write": true, 00:13:50.096 "unmap": true, 00:13:50.096 "flush": true, 00:13:50.096 "reset": true, 00:13:50.096 "nvme_admin": false, 00:13:50.096 "nvme_io": false, 00:13:50.096 "nvme_io_md": false, 00:13:50.096 "write_zeroes": true, 00:13:50.096 "zcopy": true, 00:13:50.096 "get_zone_info": false, 00:13:50.096 "zone_management": false, 00:13:50.096 "zone_append": false, 00:13:50.096 "compare": false, 00:13:50.096 "compare_and_write": false, 00:13:50.096 "abort": true, 00:13:50.096 "seek_hole": false, 00:13:50.096 "seek_data": false, 00:13:50.096 "copy": true, 00:13:50.096 "nvme_iov_md": false 00:13:50.096 }, 00:13:50.096 "memory_domains": [ 00:13:50.096 { 00:13:50.096 "dma_device_id": "system", 00:13:50.096 
"dma_device_type": 1 00:13:50.096 }, 00:13:50.096 { 00:13:50.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.096 "dma_device_type": 2 00:13:50.096 } 00:13:50.096 ], 00:13:50.096 "driver_specific": {} 00:13:50.096 } 00:13:50.096 ] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 BaseBdev3 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.096 18:44:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 [ 00:13:50.096 { 00:13:50.096 "name": "BaseBdev3", 00:13:50.096 "aliases": [ 00:13:50.096 "dd1ca3d3-7e84-458f-8387-eb066ed1c347" 00:13:50.096 ], 00:13:50.096 "product_name": "Malloc disk", 00:13:50.096 "block_size": 512, 00:13:50.096 "num_blocks": 65536, 00:13:50.096 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:50.096 "assigned_rate_limits": { 00:13:50.096 "rw_ios_per_sec": 0, 00:13:50.096 "rw_mbytes_per_sec": 0, 00:13:50.096 "r_mbytes_per_sec": 0, 00:13:50.096 "w_mbytes_per_sec": 0 00:13:50.096 }, 00:13:50.096 "claimed": false, 00:13:50.096 "zoned": false, 00:13:50.096 "supported_io_types": { 00:13:50.096 "read": true, 00:13:50.096 "write": true, 00:13:50.096 "unmap": true, 00:13:50.096 "flush": true, 00:13:50.096 "reset": true, 00:13:50.096 "nvme_admin": false, 00:13:50.096 "nvme_io": false, 00:13:50.096 "nvme_io_md": false, 00:13:50.096 "write_zeroes": true, 00:13:50.096 "zcopy": true, 00:13:50.096 "get_zone_info": false, 00:13:50.096 "zone_management": false, 00:13:50.096 "zone_append": false, 00:13:50.096 "compare": false, 00:13:50.096 "compare_and_write": false, 00:13:50.096 "abort": true, 00:13:50.096 "seek_hole": false, 00:13:50.096 "seek_data": false, 00:13:50.096 "copy": true, 00:13:50.096 "nvme_iov_md": false 00:13:50.096 }, 00:13:50.096 "memory_domains": [ 00:13:50.096 { 00:13:50.096 
"dma_device_id": "system", 00:13:50.096 "dma_device_type": 1 00:13:50.096 }, 00:13:50.096 { 00:13:50.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.096 "dma_device_type": 2 00:13:50.096 } 00:13:50.096 ], 00:13:50.096 "driver_specific": {} 00:13:50.096 } 00:13:50.096 ] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 BaseBdev4 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.096 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.097 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:50.097 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.097 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.097 [ 00:13:50.097 { 00:13:50.097 "name": "BaseBdev4", 00:13:50.097 "aliases": [ 00:13:50.097 "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a" 00:13:50.097 ], 00:13:50.097 "product_name": "Malloc disk", 00:13:50.097 "block_size": 512, 00:13:50.097 "num_blocks": 65536, 00:13:50.097 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:50.097 "assigned_rate_limits": { 00:13:50.097 "rw_ios_per_sec": 0, 00:13:50.097 "rw_mbytes_per_sec": 0, 00:13:50.097 "r_mbytes_per_sec": 0, 00:13:50.097 "w_mbytes_per_sec": 0 00:13:50.097 }, 00:13:50.097 "claimed": false, 00:13:50.097 "zoned": false, 00:13:50.097 "supported_io_types": { 00:13:50.097 "read": true, 00:13:50.097 "write": true, 00:13:50.097 "unmap": true, 00:13:50.097 "flush": true, 00:13:50.097 "reset": true, 00:13:50.097 "nvme_admin": false, 00:13:50.357 "nvme_io": false, 00:13:50.357 "nvme_io_md": false, 00:13:50.357 "write_zeroes": true, 00:13:50.357 "zcopy": true, 00:13:50.357 "get_zone_info": false, 00:13:50.357 "zone_management": false, 00:13:50.357 "zone_append": false, 00:13:50.357 "compare": false, 00:13:50.357 "compare_and_write": false, 00:13:50.357 "abort": true, 00:13:50.357 "seek_hole": false, 00:13:50.357 "seek_data": false, 00:13:50.357 "copy": true, 00:13:50.357 "nvme_iov_md": false 00:13:50.357 }, 00:13:50.357 "memory_domains": [ 
00:13:50.357 { 00:13:50.357 "dma_device_id": "system", 00:13:50.357 "dma_device_type": 1 00:13:50.357 }, 00:13:50.357 { 00:13:50.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.357 "dma_device_type": 2 00:13:50.357 } 00:13:50.357 ], 00:13:50.357 "driver_specific": {} 00:13:50.357 } 00:13:50.357 ] 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.357 [2024-12-15 18:44:50.549486] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.357 [2024-12-15 18:44:50.549615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.357 [2024-12-15 18:44:50.549658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.357 [2024-12-15 18:44:50.551447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.357 [2024-12-15 18:44:50.551549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.357 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.357 "name": "Existed_Raid", 00:13:50.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.357 "strip_size_kb": 64, 00:13:50.357 "state": "configuring", 00:13:50.357 "raid_level": "raid5f", 00:13:50.357 
"superblock": false, 00:13:50.357 "num_base_bdevs": 4, 00:13:50.357 "num_base_bdevs_discovered": 3, 00:13:50.357 "num_base_bdevs_operational": 4, 00:13:50.357 "base_bdevs_list": [ 00:13:50.357 { 00:13:50.357 "name": "BaseBdev1", 00:13:50.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.357 "is_configured": false, 00:13:50.357 "data_offset": 0, 00:13:50.357 "data_size": 0 00:13:50.357 }, 00:13:50.357 { 00:13:50.357 "name": "BaseBdev2", 00:13:50.357 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:50.357 "is_configured": true, 00:13:50.357 "data_offset": 0, 00:13:50.357 "data_size": 65536 00:13:50.357 }, 00:13:50.357 { 00:13:50.357 "name": "BaseBdev3", 00:13:50.357 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:50.357 "is_configured": true, 00:13:50.357 "data_offset": 0, 00:13:50.357 "data_size": 65536 00:13:50.357 }, 00:13:50.357 { 00:13:50.357 "name": "BaseBdev4", 00:13:50.357 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:50.357 "is_configured": true, 00:13:50.357 "data_offset": 0, 00:13:50.357 "data_size": 65536 00:13:50.357 } 00:13:50.358 ] 00:13:50.358 }' 00:13:50.358 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.358 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.618 [2024-12-15 18:44:50.932819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.618 "name": "Existed_Raid", 00:13:50.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.618 "strip_size_kb": 64, 00:13:50.618 "state": "configuring", 00:13:50.618 "raid_level": "raid5f", 00:13:50.618 "superblock": false, 
00:13:50.618 "num_base_bdevs": 4, 00:13:50.618 "num_base_bdevs_discovered": 2, 00:13:50.618 "num_base_bdevs_operational": 4, 00:13:50.618 "base_bdevs_list": [ 00:13:50.618 { 00:13:50.618 "name": "BaseBdev1", 00:13:50.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.618 "is_configured": false, 00:13:50.618 "data_offset": 0, 00:13:50.618 "data_size": 0 00:13:50.618 }, 00:13:50.618 { 00:13:50.618 "name": null, 00:13:50.618 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:50.618 "is_configured": false, 00:13:50.618 "data_offset": 0, 00:13:50.618 "data_size": 65536 00:13:50.618 }, 00:13:50.618 { 00:13:50.618 "name": "BaseBdev3", 00:13:50.618 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:50.618 "is_configured": true, 00:13:50.618 "data_offset": 0, 00:13:50.618 "data_size": 65536 00:13:50.618 }, 00:13:50.618 { 00:13:50.618 "name": "BaseBdev4", 00:13:50.618 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:50.618 "is_configured": true, 00:13:50.618 "data_offset": 0, 00:13:50.618 "data_size": 65536 00:13:50.618 } 00:13:50.618 ] 00:13:50.618 }' 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.618 18:44:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:51.188 
18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 [2024-12-15 18:44:51.423139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.188 BaseBdev1 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.188 
18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 [ 00:13:51.188 { 00:13:51.188 "name": "BaseBdev1", 00:13:51.188 "aliases": [ 00:13:51.188 "a3ba0272-f9c6-4421-a17f-146b819b9cb0" 00:13:51.188 ], 00:13:51.188 "product_name": "Malloc disk", 00:13:51.188 "block_size": 512, 00:13:51.188 "num_blocks": 65536, 00:13:51.188 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:51.188 "assigned_rate_limits": { 00:13:51.188 "rw_ios_per_sec": 0, 00:13:51.188 "rw_mbytes_per_sec": 0, 00:13:51.188 "r_mbytes_per_sec": 0, 00:13:51.188 "w_mbytes_per_sec": 0 00:13:51.188 }, 00:13:51.188 "claimed": true, 00:13:51.188 "claim_type": "exclusive_write", 00:13:51.188 "zoned": false, 00:13:51.188 "supported_io_types": { 00:13:51.188 "read": true, 00:13:51.188 "write": true, 00:13:51.188 "unmap": true, 00:13:51.188 "flush": true, 00:13:51.188 "reset": true, 00:13:51.188 "nvme_admin": false, 00:13:51.188 "nvme_io": false, 00:13:51.188 "nvme_io_md": false, 00:13:51.188 "write_zeroes": true, 00:13:51.188 "zcopy": true, 00:13:51.188 "get_zone_info": false, 00:13:51.188 "zone_management": false, 00:13:51.188 "zone_append": false, 00:13:51.188 "compare": false, 00:13:51.188 "compare_and_write": false, 00:13:51.188 "abort": true, 00:13:51.188 "seek_hole": false, 00:13:51.188 "seek_data": false, 00:13:51.188 "copy": true, 00:13:51.188 "nvme_iov_md": false 00:13:51.188 }, 00:13:51.188 "memory_domains": [ 00:13:51.188 { 00:13:51.188 "dma_device_id": "system", 00:13:51.188 "dma_device_type": 1 00:13:51.188 }, 00:13:51.188 { 00:13:51.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.188 "dma_device_type": 2 00:13:51.188 } 00:13:51.188 ], 00:13:51.188 "driver_specific": {} 00:13:51.188 } 00:13:51.188 ] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:51.188 18:44:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.188 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.189 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.189 "name": "Existed_Raid", 00:13:51.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.189 "strip_size_kb": 64, 00:13:51.189 "state": 
"configuring", 00:13:51.189 "raid_level": "raid5f", 00:13:51.189 "superblock": false, 00:13:51.189 "num_base_bdevs": 4, 00:13:51.189 "num_base_bdevs_discovered": 3, 00:13:51.189 "num_base_bdevs_operational": 4, 00:13:51.189 "base_bdevs_list": [ 00:13:51.189 { 00:13:51.189 "name": "BaseBdev1", 00:13:51.189 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:51.189 "is_configured": true, 00:13:51.189 "data_offset": 0, 00:13:51.189 "data_size": 65536 00:13:51.189 }, 00:13:51.189 { 00:13:51.189 "name": null, 00:13:51.189 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:51.189 "is_configured": false, 00:13:51.189 "data_offset": 0, 00:13:51.189 "data_size": 65536 00:13:51.189 }, 00:13:51.189 { 00:13:51.189 "name": "BaseBdev3", 00:13:51.189 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:51.189 "is_configured": true, 00:13:51.189 "data_offset": 0, 00:13:51.189 "data_size": 65536 00:13:51.189 }, 00:13:51.189 { 00:13:51.189 "name": "BaseBdev4", 00:13:51.189 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:51.189 "is_configured": true, 00:13:51.189 "data_offset": 0, 00:13:51.189 "data_size": 65536 00:13:51.189 } 00:13:51.189 ] 00:13:51.189 }' 00:13:51.189 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.189 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.448 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.448 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:51.448 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.448 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.448 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.708 18:44:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:51.708 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.709 [2024-12-15 18:44:51.918335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.709 18:44:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.709 "name": "Existed_Raid", 00:13:51.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.709 "strip_size_kb": 64, 00:13:51.709 "state": "configuring", 00:13:51.709 "raid_level": "raid5f", 00:13:51.709 "superblock": false, 00:13:51.709 "num_base_bdevs": 4, 00:13:51.709 "num_base_bdevs_discovered": 2, 00:13:51.709 "num_base_bdevs_operational": 4, 00:13:51.709 "base_bdevs_list": [ 00:13:51.709 { 00:13:51.709 "name": "BaseBdev1", 00:13:51.709 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:51.709 "is_configured": true, 00:13:51.709 "data_offset": 0, 00:13:51.709 "data_size": 65536 00:13:51.709 }, 00:13:51.709 { 00:13:51.709 "name": null, 00:13:51.709 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:51.709 "is_configured": false, 00:13:51.709 "data_offset": 0, 00:13:51.709 "data_size": 65536 00:13:51.709 }, 00:13:51.709 { 00:13:51.709 "name": null, 00:13:51.709 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:51.709 "is_configured": false, 00:13:51.709 "data_offset": 0, 00:13:51.709 "data_size": 65536 00:13:51.709 }, 00:13:51.709 { 00:13:51.709 "name": "BaseBdev4", 00:13:51.709 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:51.709 "is_configured": true, 00:13:51.709 "data_offset": 0, 00:13:51.709 "data_size": 65536 00:13:51.709 } 00:13:51.709 ] 00:13:51.709 }' 00:13:51.709 18:44:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.709 18:44:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:51.968 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.968 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.968 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 [2024-12-15 18:44:52.429497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.228 
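The `verify_raid_bdev_state` traces above derive `num_base_bdevs_discovered` from the `is_configured` flags inside `base_bdevs_list`. A rough stand-alone sketch of that counting step over a trimmed copy of the JSON printed above, using `grep` where the real script uses `jq` (the trimmed JSON and the grep-based counting are illustrative assumptions):

```shell
# Trimmed copy of the Existed_Raid dump above: one slot per line, keeping
# only the fields the counting step needs. A removed base bdev leaves a
# null-named, unconfigured slot behind.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "base_bdevs_list": [
    { "name": "BaseBdev1", "is_configured": true },
    { "name": null, "is_configured": false },
    { "name": "BaseBdev3", "is_configured": true },
    { "name": "BaseBdev4", "is_configured": true }
  ]
}'

# Count slots and configured slots by matching one flag per line.
num_base_bdevs=$(grep -c '"is_configured":' <<< "$raid_bdev_info")
num_base_bdevs_discovered=$(grep -c '"is_configured": true' <<< "$raid_bdev_info")
echo "discovered $num_base_bdevs_discovered of $num_base_bdevs base bdevs"
# prints: discovered 3 of 4 base bdevs
```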
18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.228 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.228 "name": "Existed_Raid", 00:13:52.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.228 "strip_size_kb": 64, 00:13:52.228 "state": "configuring", 00:13:52.228 "raid_level": "raid5f", 00:13:52.228 "superblock": false, 00:13:52.228 "num_base_bdevs": 4, 00:13:52.228 "num_base_bdevs_discovered": 3, 00:13:52.228 "num_base_bdevs_operational": 4, 00:13:52.228 "base_bdevs_list": [ 00:13:52.228 { 00:13:52.228 "name": "BaseBdev1", 00:13:52.228 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:52.228 "is_configured": true, 00:13:52.228 "data_offset": 0, 00:13:52.228 "data_size": 65536 00:13:52.228 }, 00:13:52.228 { 00:13:52.228 "name": null, 00:13:52.228 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:52.228 "is_configured": 
false, 00:13:52.228 "data_offset": 0, 00:13:52.228 "data_size": 65536 00:13:52.228 }, 00:13:52.228 { 00:13:52.228 "name": "BaseBdev3", 00:13:52.228 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:52.228 "is_configured": true, 00:13:52.229 "data_offset": 0, 00:13:52.229 "data_size": 65536 00:13:52.229 }, 00:13:52.229 { 00:13:52.229 "name": "BaseBdev4", 00:13:52.229 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:52.229 "is_configured": true, 00:13:52.229 "data_offset": 0, 00:13:52.229 "data_size": 65536 00:13:52.229 } 00:13:52.229 ] 00:13:52.229 }' 00:13:52.229 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.229 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.489 [2024-12-15 18:44:52.916802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.489 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.749 "name": "Existed_Raid", 00:13:52.749 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:52.749 "strip_size_kb": 64, 00:13:52.749 "state": "configuring", 00:13:52.749 "raid_level": "raid5f", 00:13:52.749 "superblock": false, 00:13:52.749 "num_base_bdevs": 4, 00:13:52.749 "num_base_bdevs_discovered": 2, 00:13:52.749 "num_base_bdevs_operational": 4, 00:13:52.749 "base_bdevs_list": [ 00:13:52.749 { 00:13:52.749 "name": null, 00:13:52.749 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:52.749 "is_configured": false, 00:13:52.749 "data_offset": 0, 00:13:52.749 "data_size": 65536 00:13:52.749 }, 00:13:52.749 { 00:13:52.749 "name": null, 00:13:52.749 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:52.749 "is_configured": false, 00:13:52.749 "data_offset": 0, 00:13:52.749 "data_size": 65536 00:13:52.749 }, 00:13:52.749 { 00:13:52.749 "name": "BaseBdev3", 00:13:52.749 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:52.749 "is_configured": true, 00:13:52.749 "data_offset": 0, 00:13:52.749 "data_size": 65536 00:13:52.749 }, 00:13:52.749 { 00:13:52.749 "name": "BaseBdev4", 00:13:52.749 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:52.749 "is_configured": true, 00:13:52.749 "data_offset": 0, 00:13:52.749 "data_size": 65536 00:13:52.749 } 00:13:52.749 ] 00:13:52.749 }' 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.749 18:44:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 [2024-12-15 18:44:53.402555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.009 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.269 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.269 "name": "Existed_Raid", 00:13:53.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.269 "strip_size_kb": 64, 00:13:53.269 "state": "configuring", 00:13:53.269 "raid_level": "raid5f", 00:13:53.269 "superblock": false, 00:13:53.269 "num_base_bdevs": 4, 00:13:53.269 "num_base_bdevs_discovered": 3, 00:13:53.269 "num_base_bdevs_operational": 4, 00:13:53.269 "base_bdevs_list": [ 00:13:53.269 { 00:13:53.269 "name": null, 00:13:53.269 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:53.269 "is_configured": false, 00:13:53.269 "data_offset": 0, 00:13:53.269 "data_size": 65536 00:13:53.269 }, 00:13:53.269 { 00:13:53.269 "name": "BaseBdev2", 00:13:53.269 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:53.269 "is_configured": true, 00:13:53.269 "data_offset": 0, 00:13:53.269 "data_size": 65536 00:13:53.269 }, 00:13:53.269 { 00:13:53.269 "name": "BaseBdev3", 00:13:53.269 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:53.269 "is_configured": true, 00:13:53.269 "data_offset": 0, 00:13:53.269 "data_size": 65536 00:13:53.269 }, 00:13:53.269 { 00:13:53.269 "name": "BaseBdev4", 00:13:53.269 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:53.269 "is_configured": true, 00:13:53.269 "data_offset": 0, 00:13:53.269 "data_size": 65536 00:13:53.269 } 00:13:53.269 ] 00:13:53.269 }' 00:13:53.269 18:44:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.269 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a3ba0272-f9c6-4421-a17f-146b819b9cb0 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.529 [2024-12-15 18:44:53.920723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:53.529 [2024-12-15 
18:44:53.920866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:53.529 [2024-12-15 18:44:53.920898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:53.529 [2024-12-15 18:44:53.921196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:53.529 [2024-12-15 18:44:53.921648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:53.529 [2024-12-15 18:44:53.921701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:53.529 NewBaseBdev 00:13:53.529 [2024-12-15 18:44:53.921912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:53.529 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.530 [ 00:13:53.530 { 00:13:53.530 "name": "NewBaseBdev", 00:13:53.530 "aliases": [ 00:13:53.530 "a3ba0272-f9c6-4421-a17f-146b819b9cb0" 00:13:53.530 ], 00:13:53.530 "product_name": "Malloc disk", 00:13:53.530 "block_size": 512, 00:13:53.530 "num_blocks": 65536, 00:13:53.530 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:53.530 "assigned_rate_limits": { 00:13:53.530 "rw_ios_per_sec": 0, 00:13:53.530 "rw_mbytes_per_sec": 0, 00:13:53.530 "r_mbytes_per_sec": 0, 00:13:53.530 "w_mbytes_per_sec": 0 00:13:53.530 }, 00:13:53.530 "claimed": true, 00:13:53.530 "claim_type": "exclusive_write", 00:13:53.530 "zoned": false, 00:13:53.530 "supported_io_types": { 00:13:53.530 "read": true, 00:13:53.530 "write": true, 00:13:53.530 "unmap": true, 00:13:53.530 "flush": true, 00:13:53.530 "reset": true, 00:13:53.530 "nvme_admin": false, 00:13:53.530 "nvme_io": false, 00:13:53.530 "nvme_io_md": false, 00:13:53.530 "write_zeroes": true, 00:13:53.530 "zcopy": true, 00:13:53.530 "get_zone_info": false, 00:13:53.530 "zone_management": false, 00:13:53.530 "zone_append": false, 00:13:53.530 "compare": false, 00:13:53.530 "compare_and_write": false, 00:13:53.530 "abort": true, 00:13:53.530 "seek_hole": false, 00:13:53.530 "seek_data": false, 00:13:53.530 "copy": true, 00:13:53.530 "nvme_iov_md": false 00:13:53.530 }, 00:13:53.530 "memory_domains": [ 00:13:53.530 { 00:13:53.530 "dma_device_id": "system", 00:13:53.530 "dma_device_type": 1 00:13:53.530 }, 00:13:53.530 { 00:13:53.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.530 "dma_device_type": 2 00:13:53.530 } 
00:13:53.530 ], 00:13:53.530 "driver_specific": {} 00:13:53.530 } 00:13:53.530 ] 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.530 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.803 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.803 18:44:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.803 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.803 18:44:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.803 18:44:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.803 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.803 "name": "Existed_Raid", 00:13:53.803 "uuid": "8e47b027-b31a-4444-95cb-02312ab7db1e", 00:13:53.803 "strip_size_kb": 64, 00:13:53.803 "state": "online", 00:13:53.803 "raid_level": "raid5f", 00:13:53.803 "superblock": false, 00:13:53.803 "num_base_bdevs": 4, 00:13:53.803 "num_base_bdevs_discovered": 4, 00:13:53.803 "num_base_bdevs_operational": 4, 00:13:53.803 "base_bdevs_list": [ 00:13:53.803 { 00:13:53.803 "name": "NewBaseBdev", 00:13:53.803 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:53.803 "is_configured": true, 00:13:53.803 "data_offset": 0, 00:13:53.803 "data_size": 65536 00:13:53.803 }, 00:13:53.803 { 00:13:53.803 "name": "BaseBdev2", 00:13:53.803 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:53.803 "is_configured": true, 00:13:53.803 "data_offset": 0, 00:13:53.803 "data_size": 65536 00:13:53.803 }, 00:13:53.803 { 00:13:53.803 "name": "BaseBdev3", 00:13:53.803 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:53.803 "is_configured": true, 00:13:53.803 "data_offset": 0, 00:13:53.803 "data_size": 65536 00:13:53.803 }, 00:13:53.803 { 00:13:53.803 "name": "BaseBdev4", 00:13:53.803 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:53.803 "is_configured": true, 00:13:53.803 "data_offset": 0, 00:13:53.803 "data_size": 65536 00:13:53.803 } 00:13:53.803 ] 00:13:53.803 }' 00:13:53.803 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.803 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.079 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.080 [2024-12-15 18:44:54.420203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:54.080 "name": "Existed_Raid", 00:13:54.080 "aliases": [ 00:13:54.080 "8e47b027-b31a-4444-95cb-02312ab7db1e" 00:13:54.080 ], 00:13:54.080 "product_name": "Raid Volume", 00:13:54.080 "block_size": 512, 00:13:54.080 "num_blocks": 196608, 00:13:54.080 "uuid": "8e47b027-b31a-4444-95cb-02312ab7db1e", 00:13:54.080 "assigned_rate_limits": { 00:13:54.080 "rw_ios_per_sec": 0, 00:13:54.080 "rw_mbytes_per_sec": 0, 00:13:54.080 "r_mbytes_per_sec": 0, 00:13:54.080 "w_mbytes_per_sec": 0 00:13:54.080 }, 00:13:54.080 "claimed": false, 00:13:54.080 "zoned": false, 00:13:54.080 "supported_io_types": { 00:13:54.080 "read": true, 00:13:54.080 "write": true, 00:13:54.080 "unmap": false, 00:13:54.080 "flush": false, 00:13:54.080 "reset": true, 00:13:54.080 "nvme_admin": false, 00:13:54.080 "nvme_io": false, 00:13:54.080 "nvme_io_md": 
false, 00:13:54.080 "write_zeroes": true, 00:13:54.080 "zcopy": false, 00:13:54.080 "get_zone_info": false, 00:13:54.080 "zone_management": false, 00:13:54.080 "zone_append": false, 00:13:54.080 "compare": false, 00:13:54.080 "compare_and_write": false, 00:13:54.080 "abort": false, 00:13:54.080 "seek_hole": false, 00:13:54.080 "seek_data": false, 00:13:54.080 "copy": false, 00:13:54.080 "nvme_iov_md": false 00:13:54.080 }, 00:13:54.080 "driver_specific": { 00:13:54.080 "raid": { 00:13:54.080 "uuid": "8e47b027-b31a-4444-95cb-02312ab7db1e", 00:13:54.080 "strip_size_kb": 64, 00:13:54.080 "state": "online", 00:13:54.080 "raid_level": "raid5f", 00:13:54.080 "superblock": false, 00:13:54.080 "num_base_bdevs": 4, 00:13:54.080 "num_base_bdevs_discovered": 4, 00:13:54.080 "num_base_bdevs_operational": 4, 00:13:54.080 "base_bdevs_list": [ 00:13:54.080 { 00:13:54.080 "name": "NewBaseBdev", 00:13:54.080 "uuid": "a3ba0272-f9c6-4421-a17f-146b819b9cb0", 00:13:54.080 "is_configured": true, 00:13:54.080 "data_offset": 0, 00:13:54.080 "data_size": 65536 00:13:54.080 }, 00:13:54.080 { 00:13:54.080 "name": "BaseBdev2", 00:13:54.080 "uuid": "48ae1dd4-a806-4fad-9fa3-ad11e3cf78ba", 00:13:54.080 "is_configured": true, 00:13:54.080 "data_offset": 0, 00:13:54.080 "data_size": 65536 00:13:54.080 }, 00:13:54.080 { 00:13:54.080 "name": "BaseBdev3", 00:13:54.080 "uuid": "dd1ca3d3-7e84-458f-8387-eb066ed1c347", 00:13:54.080 "is_configured": true, 00:13:54.080 "data_offset": 0, 00:13:54.080 "data_size": 65536 00:13:54.080 }, 00:13:54.080 { 00:13:54.080 "name": "BaseBdev4", 00:13:54.080 "uuid": "624d2a6f-18b9-41ff-8c14-c37e6eb29e6a", 00:13:54.080 "is_configured": true, 00:13:54.080 "data_offset": 0, 00:13:54.080 "data_size": 65536 00:13:54.080 } 00:13:54.080 ] 00:13:54.080 } 00:13:54.080 } 00:13:54.080 }' 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.080 18:44:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:54.080 BaseBdev2 00:13:54.080 BaseBdev3 00:13:54.080 BaseBdev4' 00:13:54.080 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.340 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.341 18:44:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.341 [2024-12-15 18:44:54.715501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.341 [2024-12-15 18:44:54.715578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.341 [2024-12-15 18:44:54.715677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.341 [2024-12-15 18:44:54.715956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.341 [2024-12-15 18:44:54.716008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 95173 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 95173 ']' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 95173 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95173 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95173' 00:13:54.341 killing process with pid 95173 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 95173 00:13:54.341 [2024-12-15 18:44:54.765549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.341 18:44:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 95173 00:13:54.601 [2024-12-15 18:44:54.807575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.601 18:44:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:54.601 00:13:54.601 real 0m9.289s 00:13:54.601 user 0m15.882s 00:13:54.601 sys 0m1.989s 00:13:54.601 18:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.601 ************************************ 00:13:54.601 END TEST raid5f_state_function_test 00:13:54.601 ************************************ 00:13:54.601 18:44:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 18:44:55 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:54.860 18:44:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:54.860 18:44:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.860 18:44:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 ************************************ 00:13:54.860 START TEST 
raid5f_state_function_test_sb 00:13:54.860 ************************************ 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.860 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:54.861 
18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95824 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95824' 00:13:54.861 Process raid pid: 95824 00:13:54.861 18:44:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95824 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95824 ']' 00:13:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.861 18:44:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.861 [2024-12-15 18:44:55.197791] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:13:54.861 [2024-12-15 18:44:55.197943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.119 [2024-12-15 18:44:55.370579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.119 [2024-12-15 18:44:55.395212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.119 [2024-12-15 18:44:55.436788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.119 [2024-12-15 18:44:55.436841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.690 [2024-12-15 18:44:56.019186] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.690 [2024-12-15 18:44:56.019240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.690 [2024-12-15 18:44:56.019261] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.690 [2024-12-15 18:44:56.019271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.690 [2024-12-15 18:44:56.019277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:55.690 [2024-12-15 18:44:56.019288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.690 [2024-12-15 18:44:56.019310] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:55.690 [2024-12-15 18:44:56.019319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.690 "name": "Existed_Raid", 00:13:55.690 "uuid": "460046d9-786a-47bf-aecf-7615ce6defec", 00:13:55.690 "strip_size_kb": 64, 00:13:55.690 "state": "configuring", 00:13:55.690 "raid_level": "raid5f", 00:13:55.690 "superblock": true, 00:13:55.690 "num_base_bdevs": 4, 00:13:55.690 "num_base_bdevs_discovered": 0, 00:13:55.690 "num_base_bdevs_operational": 4, 00:13:55.690 "base_bdevs_list": [ 00:13:55.690 { 00:13:55.690 "name": "BaseBdev1", 00:13:55.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.690 "is_configured": false, 00:13:55.690 "data_offset": 0, 00:13:55.690 "data_size": 0 00:13:55.690 }, 00:13:55.690 { 00:13:55.690 "name": "BaseBdev2", 00:13:55.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.690 "is_configured": false, 00:13:55.690 "data_offset": 0, 00:13:55.690 "data_size": 0 00:13:55.690 }, 00:13:55.690 { 00:13:55.690 "name": "BaseBdev3", 00:13:55.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.690 "is_configured": false, 00:13:55.690 "data_offset": 0, 00:13:55.690 "data_size": 0 00:13:55.690 }, 00:13:55.690 { 00:13:55.690 "name": "BaseBdev4", 00:13:55.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.690 "is_configured": false, 00:13:55.690 "data_offset": 0, 00:13:55.690 "data_size": 0 00:13:55.690 } 00:13:55.690 ] 00:13:55.690 }' 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.690 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.260 [2024-12-15 18:44:56.418388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.260 [2024-12-15 18:44:56.418433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.260 [2024-12-15 18:44:56.430398] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.260 [2024-12-15 18:44:56.430441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.260 [2024-12-15 18:44:56.430449] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.260 [2024-12-15 18:44:56.430458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.260 [2024-12-15 18:44:56.430464] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.260 [2024-12-15 18:44:56.430472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.260 [2024-12-15 18:44:56.430477] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.260 [2024-12-15 18:44:56.430485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.260 [2024-12-15 18:44:56.451093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.260 BaseBdev1 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.260 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.261 [ 00:13:56.261 { 00:13:56.261 "name": "BaseBdev1", 00:13:56.261 "aliases": [ 00:13:56.261 "3ddae699-4873-4735-b65f-99ae755183d4" 00:13:56.261 ], 00:13:56.261 "product_name": "Malloc disk", 00:13:56.261 "block_size": 512, 00:13:56.261 "num_blocks": 65536, 00:13:56.261 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:56.261 "assigned_rate_limits": { 00:13:56.261 "rw_ios_per_sec": 0, 00:13:56.261 "rw_mbytes_per_sec": 0, 00:13:56.261 "r_mbytes_per_sec": 0, 00:13:56.261 "w_mbytes_per_sec": 0 00:13:56.261 }, 00:13:56.261 "claimed": true, 00:13:56.261 "claim_type": "exclusive_write", 00:13:56.261 "zoned": false, 00:13:56.261 "supported_io_types": { 00:13:56.261 "read": true, 00:13:56.261 "write": true, 00:13:56.261 "unmap": true, 00:13:56.261 "flush": true, 00:13:56.261 "reset": true, 00:13:56.261 "nvme_admin": false, 00:13:56.261 "nvme_io": false, 00:13:56.261 "nvme_io_md": false, 00:13:56.261 "write_zeroes": true, 00:13:56.261 "zcopy": true, 00:13:56.261 "get_zone_info": false, 00:13:56.261 "zone_management": false, 00:13:56.261 "zone_append": false, 00:13:56.261 "compare": false, 00:13:56.261 "compare_and_write": false, 00:13:56.261 "abort": true, 00:13:56.261 "seek_hole": false, 00:13:56.261 "seek_data": false, 00:13:56.261 "copy": true, 00:13:56.261 "nvme_iov_md": false 00:13:56.261 }, 00:13:56.261 "memory_domains": [ 00:13:56.261 { 00:13:56.261 "dma_device_id": "system", 00:13:56.261 "dma_device_type": 1 00:13:56.261 }, 00:13:56.261 { 00:13:56.261 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:56.261 "dma_device_type": 2 00:13:56.261 } 00:13:56.261 ], 00:13:56.261 "driver_specific": {} 00:13:56.261 } 00:13:56.261 ] 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.261 18:44:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.261 "name": "Existed_Raid", 00:13:56.261 "uuid": "44d38cc7-89a4-4ad8-90d2-6bd369aad8ce", 00:13:56.261 "strip_size_kb": 64, 00:13:56.261 "state": "configuring", 00:13:56.261 "raid_level": "raid5f", 00:13:56.261 "superblock": true, 00:13:56.261 "num_base_bdevs": 4, 00:13:56.261 "num_base_bdevs_discovered": 1, 00:13:56.261 "num_base_bdevs_operational": 4, 00:13:56.261 "base_bdevs_list": [ 00:13:56.261 { 00:13:56.261 "name": "BaseBdev1", 00:13:56.261 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:56.261 "is_configured": true, 00:13:56.261 "data_offset": 2048, 00:13:56.261 "data_size": 63488 00:13:56.261 }, 00:13:56.261 { 00:13:56.261 "name": "BaseBdev2", 00:13:56.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.261 "is_configured": false, 00:13:56.261 "data_offset": 0, 00:13:56.261 "data_size": 0 00:13:56.261 }, 00:13:56.261 { 00:13:56.261 "name": "BaseBdev3", 00:13:56.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.261 "is_configured": false, 00:13:56.261 "data_offset": 0, 00:13:56.261 "data_size": 0 00:13:56.261 }, 00:13:56.261 { 00:13:56.261 "name": "BaseBdev4", 00:13:56.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.261 "is_configured": false, 00:13:56.261 "data_offset": 0, 00:13:56.261 "data_size": 0 00:13:56.261 } 00:13:56.261 ] 00:13:56.261 }' 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.261 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.521 18:44:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.521 [2024-12-15 18:44:56.910312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.521 [2024-12-15 18:44:56.910355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.521 [2024-12-15 18:44:56.922378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.521 [2024-12-15 18:44:56.924196] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.521 [2024-12-15 18:44:56.924237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.521 [2024-12-15 18:44:56.924246] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.521 [2024-12-15 18:44:56.924270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.521 [2024-12-15 18:44:56.924277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.521 [2024-12-15 18:44:56.924284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.521 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.522 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.522 18:44:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.782 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.782 "name": "Existed_Raid", 00:13:56.782 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:56.782 "strip_size_kb": 64, 00:13:56.782 "state": "configuring", 00:13:56.782 "raid_level": "raid5f", 00:13:56.782 "superblock": true, 00:13:56.782 "num_base_bdevs": 4, 00:13:56.782 "num_base_bdevs_discovered": 1, 00:13:56.782 "num_base_bdevs_operational": 4, 00:13:56.782 "base_bdevs_list": [ 00:13:56.782 { 00:13:56.782 "name": "BaseBdev1", 00:13:56.782 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:56.782 "is_configured": true, 00:13:56.782 "data_offset": 2048, 00:13:56.782 "data_size": 63488 00:13:56.782 }, 00:13:56.782 { 00:13:56.782 "name": "BaseBdev2", 00:13:56.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.782 "is_configured": false, 00:13:56.782 "data_offset": 0, 00:13:56.782 "data_size": 0 00:13:56.782 }, 00:13:56.782 { 00:13:56.782 "name": "BaseBdev3", 00:13:56.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.782 "is_configured": false, 00:13:56.782 "data_offset": 0, 00:13:56.782 "data_size": 0 00:13:56.782 }, 00:13:56.782 { 00:13:56.782 "name": "BaseBdev4", 00:13:56.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.782 "is_configured": false, 00:13:56.782 "data_offset": 0, 00:13:56.782 "data_size": 0 00:13:56.782 } 00:13:56.782 ] 00:13:56.782 }' 00:13:56.782 18:44:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.782 18:44:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.042 [2024-12-15 18:44:57.396411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.042 BaseBdev2 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.042 [ 00:13:57.042 { 00:13:57.042 "name": "BaseBdev2", 00:13:57.042 "aliases": [ 00:13:57.042 
"3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd" 00:13:57.042 ], 00:13:57.042 "product_name": "Malloc disk", 00:13:57.042 "block_size": 512, 00:13:57.042 "num_blocks": 65536, 00:13:57.042 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:57.042 "assigned_rate_limits": { 00:13:57.042 "rw_ios_per_sec": 0, 00:13:57.042 "rw_mbytes_per_sec": 0, 00:13:57.042 "r_mbytes_per_sec": 0, 00:13:57.042 "w_mbytes_per_sec": 0 00:13:57.042 }, 00:13:57.042 "claimed": true, 00:13:57.042 "claim_type": "exclusive_write", 00:13:57.042 "zoned": false, 00:13:57.042 "supported_io_types": { 00:13:57.042 "read": true, 00:13:57.042 "write": true, 00:13:57.042 "unmap": true, 00:13:57.042 "flush": true, 00:13:57.042 "reset": true, 00:13:57.042 "nvme_admin": false, 00:13:57.042 "nvme_io": false, 00:13:57.042 "nvme_io_md": false, 00:13:57.042 "write_zeroes": true, 00:13:57.042 "zcopy": true, 00:13:57.042 "get_zone_info": false, 00:13:57.042 "zone_management": false, 00:13:57.042 "zone_append": false, 00:13:57.042 "compare": false, 00:13:57.042 "compare_and_write": false, 00:13:57.042 "abort": true, 00:13:57.042 "seek_hole": false, 00:13:57.042 "seek_data": false, 00:13:57.042 "copy": true, 00:13:57.042 "nvme_iov_md": false 00:13:57.042 }, 00:13:57.042 "memory_domains": [ 00:13:57.042 { 00:13:57.042 "dma_device_id": "system", 00:13:57.042 "dma_device_type": 1 00:13:57.042 }, 00:13:57.042 { 00:13:57.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.042 "dma_device_type": 2 00:13:57.042 } 00:13:57.042 ], 00:13:57.042 "driver_specific": {} 00:13:57.042 } 00:13:57.042 ] 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.042 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.302 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.303 "name": "Existed_Raid", 00:13:57.303 "uuid": 
"1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:57.303 "strip_size_kb": 64, 00:13:57.303 "state": "configuring", 00:13:57.303 "raid_level": "raid5f", 00:13:57.303 "superblock": true, 00:13:57.303 "num_base_bdevs": 4, 00:13:57.303 "num_base_bdevs_discovered": 2, 00:13:57.303 "num_base_bdevs_operational": 4, 00:13:57.303 "base_bdevs_list": [ 00:13:57.303 { 00:13:57.303 "name": "BaseBdev1", 00:13:57.303 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:57.303 "is_configured": true, 00:13:57.303 "data_offset": 2048, 00:13:57.303 "data_size": 63488 00:13:57.303 }, 00:13:57.303 { 00:13:57.303 "name": "BaseBdev2", 00:13:57.303 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:57.303 "is_configured": true, 00:13:57.303 "data_offset": 2048, 00:13:57.303 "data_size": 63488 00:13:57.303 }, 00:13:57.303 { 00:13:57.303 "name": "BaseBdev3", 00:13:57.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.303 "is_configured": false, 00:13:57.303 "data_offset": 0, 00:13:57.303 "data_size": 0 00:13:57.303 }, 00:13:57.303 { 00:13:57.303 "name": "BaseBdev4", 00:13:57.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.303 "is_configured": false, 00:13:57.303 "data_offset": 0, 00:13:57.303 "data_size": 0 00:13:57.303 } 00:13:57.303 ] 00:13:57.303 }' 00:13:57.303 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.303 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 [2024-12-15 18:44:57.893438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.563 BaseBdev3 
00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 [ 00:13:57.563 { 00:13:57.563 "name": "BaseBdev3", 00:13:57.563 "aliases": [ 00:13:57.563 "1a550e70-496b-4a7d-af80-4f43bce28a89" 00:13:57.563 ], 00:13:57.563 "product_name": "Malloc disk", 00:13:57.563 "block_size": 512, 00:13:57.563 "num_blocks": 65536, 00:13:57.563 "uuid": "1a550e70-496b-4a7d-af80-4f43bce28a89", 00:13:57.563 
"assigned_rate_limits": { 00:13:57.563 "rw_ios_per_sec": 0, 00:13:57.563 "rw_mbytes_per_sec": 0, 00:13:57.563 "r_mbytes_per_sec": 0, 00:13:57.563 "w_mbytes_per_sec": 0 00:13:57.563 }, 00:13:57.563 "claimed": true, 00:13:57.563 "claim_type": "exclusive_write", 00:13:57.563 "zoned": false, 00:13:57.563 "supported_io_types": { 00:13:57.563 "read": true, 00:13:57.563 "write": true, 00:13:57.563 "unmap": true, 00:13:57.563 "flush": true, 00:13:57.563 "reset": true, 00:13:57.563 "nvme_admin": false, 00:13:57.563 "nvme_io": false, 00:13:57.563 "nvme_io_md": false, 00:13:57.563 "write_zeroes": true, 00:13:57.563 "zcopy": true, 00:13:57.563 "get_zone_info": false, 00:13:57.563 "zone_management": false, 00:13:57.563 "zone_append": false, 00:13:57.563 "compare": false, 00:13:57.563 "compare_and_write": false, 00:13:57.563 "abort": true, 00:13:57.563 "seek_hole": false, 00:13:57.563 "seek_data": false, 00:13:57.563 "copy": true, 00:13:57.563 "nvme_iov_md": false 00:13:57.563 }, 00:13:57.563 "memory_domains": [ 00:13:57.563 { 00:13:57.563 "dma_device_id": "system", 00:13:57.563 "dma_device_type": 1 00:13:57.563 }, 00:13:57.563 { 00:13:57.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.563 "dma_device_type": 2 00:13:57.563 } 00:13:57.563 ], 00:13:57.563 "driver_specific": {} 00:13:57.563 } 00:13:57.563 ] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.563 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.563 "name": "Existed_Raid", 00:13:57.563 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:57.563 "strip_size_kb": 64, 00:13:57.563 "state": "configuring", 00:13:57.563 "raid_level": "raid5f", 00:13:57.563 "superblock": true, 00:13:57.563 "num_base_bdevs": 4, 00:13:57.563 "num_base_bdevs_discovered": 3, 
00:13:57.563 "num_base_bdevs_operational": 4, 00:13:57.563 "base_bdevs_list": [ 00:13:57.563 { 00:13:57.563 "name": "BaseBdev1", 00:13:57.563 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:57.563 "is_configured": true, 00:13:57.563 "data_offset": 2048, 00:13:57.563 "data_size": 63488 00:13:57.563 }, 00:13:57.563 { 00:13:57.563 "name": "BaseBdev2", 00:13:57.563 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:57.563 "is_configured": true, 00:13:57.563 "data_offset": 2048, 00:13:57.563 "data_size": 63488 00:13:57.563 }, 00:13:57.563 { 00:13:57.563 "name": "BaseBdev3", 00:13:57.563 "uuid": "1a550e70-496b-4a7d-af80-4f43bce28a89", 00:13:57.563 "is_configured": true, 00:13:57.563 "data_offset": 2048, 00:13:57.563 "data_size": 63488 00:13:57.563 }, 00:13:57.564 { 00:13:57.564 "name": "BaseBdev4", 00:13:57.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.564 "is_configured": false, 00:13:57.564 "data_offset": 0, 00:13:57.564 "data_size": 0 00:13:57.564 } 00:13:57.564 ] 00:13:57.564 }' 00:13:57.564 18:44:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.564 18:44:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.134 [2024-12-15 18:44:58.411573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.134 [2024-12-15 18:44:58.411916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:58.134 [2024-12-15 18:44:58.411969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:58.134 [2024-12-15 
18:44:58.412281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:58.134 BaseBdev4 00:13:58.134 [2024-12-15 18:44:58.412782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:58.134 [2024-12-15 18:44:58.412863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:58.134 [2024-12-15 18:44:58.413039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:58.134 18:44:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.134 [ 00:13:58.134 { 00:13:58.134 "name": "BaseBdev4", 00:13:58.134 "aliases": [ 00:13:58.134 "eefb9d5d-0fe0-412c-8e38-cad2d6e4a76a" 00:13:58.134 ], 00:13:58.134 "product_name": "Malloc disk", 00:13:58.134 "block_size": 512, 00:13:58.134 "num_blocks": 65536, 00:13:58.134 "uuid": "eefb9d5d-0fe0-412c-8e38-cad2d6e4a76a", 00:13:58.134 "assigned_rate_limits": { 00:13:58.134 "rw_ios_per_sec": 0, 00:13:58.134 "rw_mbytes_per_sec": 0, 00:13:58.134 "r_mbytes_per_sec": 0, 00:13:58.134 "w_mbytes_per_sec": 0 00:13:58.134 }, 00:13:58.134 "claimed": true, 00:13:58.134 "claim_type": "exclusive_write", 00:13:58.134 "zoned": false, 00:13:58.134 "supported_io_types": { 00:13:58.134 "read": true, 00:13:58.134 "write": true, 00:13:58.134 "unmap": true, 00:13:58.134 "flush": true, 00:13:58.134 "reset": true, 00:13:58.134 "nvme_admin": false, 00:13:58.134 "nvme_io": false, 00:13:58.134 "nvme_io_md": false, 00:13:58.134 "write_zeroes": true, 00:13:58.134 "zcopy": true, 00:13:58.134 "get_zone_info": false, 00:13:58.134 "zone_management": false, 00:13:58.134 "zone_append": false, 00:13:58.134 "compare": false, 00:13:58.134 "compare_and_write": false, 00:13:58.134 "abort": true, 00:13:58.134 "seek_hole": false, 00:13:58.134 "seek_data": false, 00:13:58.134 "copy": true, 00:13:58.134 "nvme_iov_md": false 00:13:58.134 }, 00:13:58.134 "memory_domains": [ 00:13:58.134 { 00:13:58.134 "dma_device_id": "system", 00:13:58.134 "dma_device_type": 1 00:13:58.134 }, 00:13:58.134 { 00:13:58.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.134 "dma_device_type": 2 00:13:58.134 } 00:13:58.134 ], 00:13:58.134 "driver_specific": {} 00:13:58.134 } 00:13:58.134 ] 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.134 18:44:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.134 "name": "Existed_Raid", 00:13:58.134 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:58.134 "strip_size_kb": 64, 00:13:58.134 "state": "online", 00:13:58.134 "raid_level": "raid5f", 00:13:58.134 "superblock": true, 00:13:58.134 "num_base_bdevs": 4, 00:13:58.134 "num_base_bdevs_discovered": 4, 00:13:58.134 "num_base_bdevs_operational": 4, 00:13:58.134 "base_bdevs_list": [ 00:13:58.134 { 00:13:58.134 "name": "BaseBdev1", 00:13:58.134 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:58.134 "is_configured": true, 00:13:58.134 "data_offset": 2048, 00:13:58.134 "data_size": 63488 00:13:58.134 }, 00:13:58.134 { 00:13:58.134 "name": "BaseBdev2", 00:13:58.134 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:58.134 "is_configured": true, 00:13:58.134 "data_offset": 2048, 00:13:58.134 "data_size": 63488 00:13:58.134 }, 00:13:58.134 { 00:13:58.134 "name": "BaseBdev3", 00:13:58.134 "uuid": "1a550e70-496b-4a7d-af80-4f43bce28a89", 00:13:58.134 "is_configured": true, 00:13:58.134 "data_offset": 2048, 00:13:58.134 "data_size": 63488 00:13:58.134 }, 00:13:58.134 { 00:13:58.134 "name": "BaseBdev4", 00:13:58.134 "uuid": "eefb9d5d-0fe0-412c-8e38-cad2d6e4a76a", 00:13:58.134 "is_configured": true, 00:13:58.134 "data_offset": 2048, 00:13:58.134 "data_size": 63488 00:13:58.134 } 00:13:58.134 ] 00:13:58.134 }' 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.134 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.704 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.705 [2024-12-15 18:44:58.871066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.705 "name": "Existed_Raid", 00:13:58.705 "aliases": [ 00:13:58.705 "1da48b95-f882-4603-8f8e-e4a655470d00" 00:13:58.705 ], 00:13:58.705 "product_name": "Raid Volume", 00:13:58.705 "block_size": 512, 00:13:58.705 "num_blocks": 190464, 00:13:58.705 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:58.705 "assigned_rate_limits": { 00:13:58.705 "rw_ios_per_sec": 0, 00:13:58.705 "rw_mbytes_per_sec": 0, 00:13:58.705 "r_mbytes_per_sec": 0, 00:13:58.705 "w_mbytes_per_sec": 0 00:13:58.705 }, 00:13:58.705 "claimed": false, 00:13:58.705 "zoned": false, 00:13:58.705 "supported_io_types": { 00:13:58.705 "read": true, 00:13:58.705 "write": true, 00:13:58.705 "unmap": false, 00:13:58.705 "flush": false, 
00:13:58.705 "reset": true, 00:13:58.705 "nvme_admin": false, 00:13:58.705 "nvme_io": false, 00:13:58.705 "nvme_io_md": false, 00:13:58.705 "write_zeroes": true, 00:13:58.705 "zcopy": false, 00:13:58.705 "get_zone_info": false, 00:13:58.705 "zone_management": false, 00:13:58.705 "zone_append": false, 00:13:58.705 "compare": false, 00:13:58.705 "compare_and_write": false, 00:13:58.705 "abort": false, 00:13:58.705 "seek_hole": false, 00:13:58.705 "seek_data": false, 00:13:58.705 "copy": false, 00:13:58.705 "nvme_iov_md": false 00:13:58.705 }, 00:13:58.705 "driver_specific": { 00:13:58.705 "raid": { 00:13:58.705 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:58.705 "strip_size_kb": 64, 00:13:58.705 "state": "online", 00:13:58.705 "raid_level": "raid5f", 00:13:58.705 "superblock": true, 00:13:58.705 "num_base_bdevs": 4, 00:13:58.705 "num_base_bdevs_discovered": 4, 00:13:58.705 "num_base_bdevs_operational": 4, 00:13:58.705 "base_bdevs_list": [ 00:13:58.705 { 00:13:58.705 "name": "BaseBdev1", 00:13:58.705 "uuid": "3ddae699-4873-4735-b65f-99ae755183d4", 00:13:58.705 "is_configured": true, 00:13:58.705 "data_offset": 2048, 00:13:58.705 "data_size": 63488 00:13:58.705 }, 00:13:58.705 { 00:13:58.705 "name": "BaseBdev2", 00:13:58.705 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:58.705 "is_configured": true, 00:13:58.705 "data_offset": 2048, 00:13:58.705 "data_size": 63488 00:13:58.705 }, 00:13:58.705 { 00:13:58.705 "name": "BaseBdev3", 00:13:58.705 "uuid": "1a550e70-496b-4a7d-af80-4f43bce28a89", 00:13:58.705 "is_configured": true, 00:13:58.705 "data_offset": 2048, 00:13:58.705 "data_size": 63488 00:13:58.705 }, 00:13:58.705 { 00:13:58.705 "name": "BaseBdev4", 00:13:58.705 "uuid": "eefb9d5d-0fe0-412c-8e38-cad2d6e4a76a", 00:13:58.705 "is_configured": true, 00:13:58.705 "data_offset": 2048, 00:13:58.705 "data_size": 63488 00:13:58.705 } 00:13:58.705 ] 00:13:58.705 } 00:13:58.705 } 00:13:58.705 }' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:58.705 BaseBdev2 00:13:58.705 BaseBdev3 00:13:58.705 BaseBdev4' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 18:44:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.705 18:44:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.705 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.965 [2024-12-15 18:44:59.166369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.965 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.965 "name": "Existed_Raid", 00:13:58.965 "uuid": "1da48b95-f882-4603-8f8e-e4a655470d00", 00:13:58.965 "strip_size_kb": 64, 00:13:58.965 "state": "online", 00:13:58.965 "raid_level": "raid5f", 00:13:58.965 "superblock": true, 00:13:58.965 "num_base_bdevs": 4, 00:13:58.965 "num_base_bdevs_discovered": 3, 00:13:58.965 "num_base_bdevs_operational": 3, 00:13:58.965 "base_bdevs_list": [ 00:13:58.965 { 00:13:58.965 "name": null, 00:13:58.965 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:58.966 "is_configured": false, 00:13:58.966 "data_offset": 0, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": "BaseBdev2", 00:13:58.966 "uuid": "3d5f49ab-ec41-4c3d-a7b3-6d7f99b42dcd", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": "BaseBdev3", 00:13:58.966 "uuid": "1a550e70-496b-4a7d-af80-4f43bce28a89", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": "BaseBdev4", 00:13:58.966 "uuid": "eefb9d5d-0fe0-412c-8e38-cad2d6e4a76a", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 } 00:13:58.966 ] 00:13:58.966 }' 00:13:58.966 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.966 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
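The `verify_raid_bdev_state` trace above reduces to a jq filter over `bdev_raid_get_bdevs` output followed by plain string comparisons. A minimal standalone sketch of that pattern, with a trimmed sample JSON inlined as a stand-in (it is not captured from this run and assumes no live SPDK target):

```shell
# Sample raid bdev info, shaped like `rpc.py bdev_raid_get_bdevs all` output
# (trimmed to the fields the state check actually reads).
raid_json='[{"name":"Existed_Raid","state":"online","num_base_bdevs_discovered":3}]'

# Select the entry for one raid bdev, as bdev_raid.sh@113 does.
info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_json")

# Compare a single field against the expected state.
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[ "$state" = "online" ] && echo "state OK"
```

The same `select`/field-compare shape is reused for every state assertion in the trace; only the expected values change.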
00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.226 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.226 [2024-12-15 18:44:59.660946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:59.226 [2024-12-15 18:44:59.661150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.486 [2024-12-15 18:44:59.672691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.486 
18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 [2024-12-15 18:44:59.716611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 [2024-12-15 18:44:59.783656] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:59.486 [2024-12-15 18:44:59.783698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.486 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.487 BaseBdev2 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.487 [ 00:13:59.487 { 00:13:59.487 "name": "BaseBdev2", 00:13:59.487 "aliases": [ 00:13:59.487 "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125" 00:13:59.487 ], 00:13:59.487 "product_name": "Malloc disk", 00:13:59.487 "block_size": 512, 00:13:59.487 "num_blocks": 65536, 00:13:59.487 "uuid": 
"c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:13:59.487 "assigned_rate_limits": { 00:13:59.487 "rw_ios_per_sec": 0, 00:13:59.487 "rw_mbytes_per_sec": 0, 00:13:59.487 "r_mbytes_per_sec": 0, 00:13:59.487 "w_mbytes_per_sec": 0 00:13:59.487 }, 00:13:59.487 "claimed": false, 00:13:59.487 "zoned": false, 00:13:59.487 "supported_io_types": { 00:13:59.487 "read": true, 00:13:59.487 "write": true, 00:13:59.487 "unmap": true, 00:13:59.487 "flush": true, 00:13:59.487 "reset": true, 00:13:59.487 "nvme_admin": false, 00:13:59.487 "nvme_io": false, 00:13:59.487 "nvme_io_md": false, 00:13:59.487 "write_zeroes": true, 00:13:59.487 "zcopy": true, 00:13:59.487 "get_zone_info": false, 00:13:59.487 "zone_management": false, 00:13:59.487 "zone_append": false, 00:13:59.487 "compare": false, 00:13:59.487 "compare_and_write": false, 00:13:59.487 "abort": true, 00:13:59.487 "seek_hole": false, 00:13:59.487 "seek_data": false, 00:13:59.487 "copy": true, 00:13:59.487 "nvme_iov_md": false 00:13:59.487 }, 00:13:59.487 "memory_domains": [ 00:13:59.487 { 00:13:59.487 "dma_device_id": "system", 00:13:59.487 "dma_device_type": 1 00:13:59.487 }, 00:13:59.487 { 00:13:59.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.487 "dma_device_type": 2 00:13:59.487 } 00:13:59.487 ], 00:13:59.487 "driver_specific": {} 00:13:59.487 } 00:13:59.487 ] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.487 BaseBdev3 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.487 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.747 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.747 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:59.747 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.747 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.747 [ 00:13:59.747 { 00:13:59.747 "name": "BaseBdev3", 00:13:59.747 "aliases": [ 00:13:59.747 "f6f52498-3702-4565-bb65-4c5e270405a4" 00:13:59.747 ], 00:13:59.747 
"product_name": "Malloc disk", 00:13:59.747 "block_size": 512, 00:13:59.747 "num_blocks": 65536, 00:13:59.748 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:13:59.748 "assigned_rate_limits": { 00:13:59.748 "rw_ios_per_sec": 0, 00:13:59.748 "rw_mbytes_per_sec": 0, 00:13:59.748 "r_mbytes_per_sec": 0, 00:13:59.748 "w_mbytes_per_sec": 0 00:13:59.748 }, 00:13:59.748 "claimed": false, 00:13:59.748 "zoned": false, 00:13:59.748 "supported_io_types": { 00:13:59.748 "read": true, 00:13:59.748 "write": true, 00:13:59.748 "unmap": true, 00:13:59.748 "flush": true, 00:13:59.748 "reset": true, 00:13:59.748 "nvme_admin": false, 00:13:59.748 "nvme_io": false, 00:13:59.748 "nvme_io_md": false, 00:13:59.748 "write_zeroes": true, 00:13:59.748 "zcopy": true, 00:13:59.748 "get_zone_info": false, 00:13:59.748 "zone_management": false, 00:13:59.748 "zone_append": false, 00:13:59.748 "compare": false, 00:13:59.748 "compare_and_write": false, 00:13:59.748 "abort": true, 00:13:59.748 "seek_hole": false, 00:13:59.748 "seek_data": false, 00:13:59.748 "copy": true, 00:13:59.748 "nvme_iov_md": false 00:13:59.748 }, 00:13:59.748 "memory_domains": [ 00:13:59.748 { 00:13:59.748 "dma_device_id": "system", 00:13:59.748 "dma_device_type": 1 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.748 "dma_device_type": 2 00:13:59.748 } 00:13:59.748 ], 00:13:59.748 "driver_specific": {} 00:13:59.748 } 00:13:59.748 ] 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 BaseBdev4 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 18:44:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 [ 00:13:59.748 { 00:13:59.748 "name": "BaseBdev4", 00:13:59.748 
"aliases": [ 00:13:59.748 "857a68f2-95c2-48b8-8abb-f89b4da912c7" 00:13:59.748 ], 00:13:59.748 "product_name": "Malloc disk", 00:13:59.748 "block_size": 512, 00:13:59.748 "num_blocks": 65536, 00:13:59.748 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:13:59.748 "assigned_rate_limits": { 00:13:59.748 "rw_ios_per_sec": 0, 00:13:59.748 "rw_mbytes_per_sec": 0, 00:13:59.748 "r_mbytes_per_sec": 0, 00:13:59.748 "w_mbytes_per_sec": 0 00:13:59.748 }, 00:13:59.748 "claimed": false, 00:13:59.748 "zoned": false, 00:13:59.748 "supported_io_types": { 00:13:59.748 "read": true, 00:13:59.748 "write": true, 00:13:59.748 "unmap": true, 00:13:59.748 "flush": true, 00:13:59.748 "reset": true, 00:13:59.748 "nvme_admin": false, 00:13:59.748 "nvme_io": false, 00:13:59.748 "nvme_io_md": false, 00:13:59.748 "write_zeroes": true, 00:13:59.748 "zcopy": true, 00:13:59.748 "get_zone_info": false, 00:13:59.748 "zone_management": false, 00:13:59.748 "zone_append": false, 00:13:59.748 "compare": false, 00:13:59.748 "compare_and_write": false, 00:13:59.748 "abort": true, 00:13:59.748 "seek_hole": false, 00:13:59.748 "seek_data": false, 00:13:59.748 "copy": true, 00:13:59.748 "nvme_iov_md": false 00:13:59.748 }, 00:13:59.748 "memory_domains": [ 00:13:59.748 { 00:13:59.748 "dma_device_id": "system", 00:13:59.748 "dma_device_type": 1 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.748 "dma_device_type": 2 00:13:59.748 } 00:13:59.748 ], 00:13:59.748 "driver_specific": {} 00:13:59.748 } 00:13:59.748 ] 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:59.748 
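The geometry check earlier in the trace (`bdev_raid.sh@189`/`@192`) joins `block_size`, `md_size`, `md_interleave`, and `dif_type` into one string per bdev and compares raid against each base bdev. A hedged standalone sketch with inlined sample JSON (a stand-in for the RPC output; jq renders the null metadata fields as empty strings, which is why the compared values carry trailing spaces, e.g. `'512 '` in the log):

```shell
# Geometry string for the raid bdev, mirroring the jq filter at bdev_raid.sh@189.
raid_geom=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
    <<< '{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}')

# Geometry string for one base bdev, mirroring bdev_raid.sh@192
# (bdev_get_bdevs returns an array, hence the leading .[]).
base_geom=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' \
    <<< '[{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]')

# The test passes only when every base bdev matches the raid bdev exactly,
# trailing spaces included.
[ "$raid_geom" = "$base_geom" ] && echo "geometry match"
```

This is run once per name in `$base_bdev_names`, which is why the trace repeats the same `cmp_base_bdev='512 '` comparison for BaseBdev1 through BaseBdev4.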
18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 [2024-12-15 18:45:00.010735] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.748 [2024-12-15 18:45:00.010785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.748 [2024-12-15 18:45:00.010833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.748 [2024-12-15 18:45:00.012604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.748 [2024-12-15 18:45:00.012661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.748 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.748 "name": "Existed_Raid", 00:13:59.748 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:13:59.748 "strip_size_kb": 64, 00:13:59.748 "state": "configuring", 00:13:59.748 "raid_level": "raid5f", 00:13:59.748 "superblock": true, 00:13:59.748 "num_base_bdevs": 4, 00:13:59.748 "num_base_bdevs_discovered": 3, 00:13:59.748 "num_base_bdevs_operational": 4, 00:13:59.748 "base_bdevs_list": [ 00:13:59.748 { 00:13:59.748 "name": "BaseBdev1", 00:13:59.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.748 "is_configured": false, 00:13:59.748 "data_offset": 0, 00:13:59.748 "data_size": 0 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": "BaseBdev2", 00:13:59.748 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 2048, 00:13:59.748 "data_size": 63488 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": "BaseBdev3", 
00:13:59.748 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 2048, 00:13:59.748 "data_size": 63488 00:13:59.748 }, 00:13:59.748 { 00:13:59.748 "name": "BaseBdev4", 00:13:59.748 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:13:59.748 "is_configured": true, 00:13:59.748 "data_offset": 2048, 00:13:59.748 "data_size": 63488 00:13:59.748 } 00:13:59.748 ] 00:13:59.749 }' 00:13:59.749 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.749 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.008 [2024-12-15 18:45:00.418024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.008 
18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.008 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.268 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.268 "name": "Existed_Raid", 00:14:00.268 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:00.268 "strip_size_kb": 64, 00:14:00.268 "state": "configuring", 00:14:00.268 "raid_level": "raid5f", 00:14:00.268 "superblock": true, 00:14:00.268 "num_base_bdevs": 4, 00:14:00.268 "num_base_bdevs_discovered": 2, 00:14:00.268 "num_base_bdevs_operational": 4, 00:14:00.268 "base_bdevs_list": [ 00:14:00.268 { 00:14:00.268 "name": "BaseBdev1", 00:14:00.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.268 "is_configured": false, 00:14:00.268 "data_offset": 0, 00:14:00.268 "data_size": 0 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "name": null, 00:14:00.268 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:00.268 "is_configured": false, 00:14:00.268 "data_offset": 0, 00:14:00.268 "data_size": 63488 00:14:00.268 }, 00:14:00.268 { 
00:14:00.268 "name": "BaseBdev3", 00:14:00.268 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:00.268 "is_configured": true, 00:14:00.268 "data_offset": 2048, 00:14:00.268 "data_size": 63488 00:14:00.268 }, 00:14:00.268 { 00:14:00.268 "name": "BaseBdev4", 00:14:00.268 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:00.268 "is_configured": true, 00:14:00.268 "data_offset": 2048, 00:14:00.268 "data_size": 63488 00:14:00.268 } 00:14:00.268 ] 00:14:00.268 }' 00:14:00.268 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.268 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 BaseBdev1 00:14:00.528 [2024-12-15 18:45:00.900209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 [ 00:14:00.528 { 00:14:00.528 "name": "BaseBdev1", 00:14:00.528 "aliases": [ 00:14:00.528 "e51c5c35-7161-4af5-8232-f7e81b560bdc" 00:14:00.528 ], 00:14:00.528 "product_name": "Malloc disk", 00:14:00.528 "block_size": 512, 00:14:00.528 "num_blocks": 65536, 00:14:00.528 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:00.528 "assigned_rate_limits": { 00:14:00.528 "rw_ios_per_sec": 0, 00:14:00.528 "rw_mbytes_per_sec": 0, 00:14:00.528 
"r_mbytes_per_sec": 0, 00:14:00.528 "w_mbytes_per_sec": 0 00:14:00.528 }, 00:14:00.528 "claimed": true, 00:14:00.528 "claim_type": "exclusive_write", 00:14:00.528 "zoned": false, 00:14:00.528 "supported_io_types": { 00:14:00.528 "read": true, 00:14:00.528 "write": true, 00:14:00.528 "unmap": true, 00:14:00.528 "flush": true, 00:14:00.528 "reset": true, 00:14:00.528 "nvme_admin": false, 00:14:00.528 "nvme_io": false, 00:14:00.528 "nvme_io_md": false, 00:14:00.528 "write_zeroes": true, 00:14:00.528 "zcopy": true, 00:14:00.528 "get_zone_info": false, 00:14:00.528 "zone_management": false, 00:14:00.528 "zone_append": false, 00:14:00.528 "compare": false, 00:14:00.528 "compare_and_write": false, 00:14:00.528 "abort": true, 00:14:00.528 "seek_hole": false, 00:14:00.528 "seek_data": false, 00:14:00.528 "copy": true, 00:14:00.528 "nvme_iov_md": false 00:14:00.528 }, 00:14:00.528 "memory_domains": [ 00:14:00.528 { 00:14:00.528 "dma_device_id": "system", 00:14:00.528 "dma_device_type": 1 00:14:00.528 }, 00:14:00.528 { 00:14:00.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.528 "dma_device_type": 2 00:14:00.528 } 00:14:00.528 ], 00:14:00.528 "driver_specific": {} 00:14:00.528 } 00:14:00.528 ] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.528 18:45:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.528 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.788 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.788 "name": "Existed_Raid", 00:14:00.788 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:00.788 "strip_size_kb": 64, 00:14:00.788 "state": "configuring", 00:14:00.788 "raid_level": "raid5f", 00:14:00.788 "superblock": true, 00:14:00.788 "num_base_bdevs": 4, 00:14:00.788 "num_base_bdevs_discovered": 3, 00:14:00.788 "num_base_bdevs_operational": 4, 00:14:00.788 "base_bdevs_list": [ 00:14:00.788 { 00:14:00.788 "name": "BaseBdev1", 00:14:00.788 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:00.788 "is_configured": true, 00:14:00.788 "data_offset": 2048, 00:14:00.788 "data_size": 63488 00:14:00.788 
}, 00:14:00.788 { 00:14:00.788 "name": null, 00:14:00.788 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:00.788 "is_configured": false, 00:14:00.788 "data_offset": 0, 00:14:00.788 "data_size": 63488 00:14:00.788 }, 00:14:00.788 { 00:14:00.788 "name": "BaseBdev3", 00:14:00.788 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:00.788 "is_configured": true, 00:14:00.788 "data_offset": 2048, 00:14:00.788 "data_size": 63488 00:14:00.788 }, 00:14:00.788 { 00:14:00.788 "name": "BaseBdev4", 00:14:00.788 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:00.788 "is_configured": true, 00:14:00.788 "data_offset": 2048, 00:14:00.788 "data_size": 63488 00:14:00.788 } 00:14:00.788 ] 00:14:00.788 }' 00:14:00.788 18:45:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.788 18:45:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.048 
[2024-12-15 18:45:01.427336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:01.048 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.048 "name": "Existed_Raid", 00:14:01.048 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:01.048 "strip_size_kb": 64, 00:14:01.048 "state": "configuring", 00:14:01.048 "raid_level": "raid5f", 00:14:01.048 "superblock": true, 00:14:01.048 "num_base_bdevs": 4, 00:14:01.048 "num_base_bdevs_discovered": 2, 00:14:01.048 "num_base_bdevs_operational": 4, 00:14:01.048 "base_bdevs_list": [ 00:14:01.048 { 00:14:01.048 "name": "BaseBdev1", 00:14:01.048 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:01.048 "is_configured": true, 00:14:01.048 "data_offset": 2048, 00:14:01.048 "data_size": 63488 00:14:01.048 }, 00:14:01.048 { 00:14:01.048 "name": null, 00:14:01.048 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:01.048 "is_configured": false, 00:14:01.048 "data_offset": 0, 00:14:01.048 "data_size": 63488 00:14:01.048 }, 00:14:01.048 { 00:14:01.048 "name": null, 00:14:01.048 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:01.048 "is_configured": false, 00:14:01.048 "data_offset": 0, 00:14:01.048 "data_size": 63488 00:14:01.048 }, 00:14:01.048 { 00:14:01.048 "name": "BaseBdev4", 00:14:01.048 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:01.049 "is_configured": true, 00:14:01.049 "data_offset": 2048, 00:14:01.049 "data_size": 63488 00:14:01.049 } 00:14:01.049 ] 00:14:01.049 }' 00:14:01.049 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.049 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.619 [2024-12-15 18:45:01.894594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.619 18:45:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.619 "name": "Existed_Raid", 00:14:01.619 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:01.619 "strip_size_kb": 64, 00:14:01.619 "state": "configuring", 00:14:01.619 "raid_level": "raid5f", 00:14:01.619 "superblock": true, 00:14:01.619 "num_base_bdevs": 4, 00:14:01.619 "num_base_bdevs_discovered": 3, 00:14:01.619 "num_base_bdevs_operational": 4, 00:14:01.619 "base_bdevs_list": [ 00:14:01.619 { 00:14:01.619 "name": "BaseBdev1", 00:14:01.619 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:01.619 "is_configured": true, 00:14:01.619 "data_offset": 2048, 00:14:01.619 "data_size": 63488 00:14:01.619 }, 00:14:01.619 { 00:14:01.619 "name": null, 00:14:01.619 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:01.619 "is_configured": false, 00:14:01.619 "data_offset": 0, 00:14:01.619 "data_size": 63488 00:14:01.619 }, 00:14:01.619 { 00:14:01.619 "name": "BaseBdev3", 00:14:01.619 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:01.619 "is_configured": true, 00:14:01.619 "data_offset": 2048, 00:14:01.619 "data_size": 63488 00:14:01.619 }, 00:14:01.619 { 
00:14:01.619 "name": "BaseBdev4", 00:14:01.619 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:01.619 "is_configured": true, 00:14:01.619 "data_offset": 2048, 00:14:01.619 "data_size": 63488 00:14:01.619 } 00:14:01.619 ] 00:14:01.619 }' 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.619 18:45:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 [2024-12-15 18:45:02.405745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.189 "name": "Existed_Raid", 00:14:02.189 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:02.189 "strip_size_kb": 64, 00:14:02.189 "state": "configuring", 00:14:02.189 "raid_level": "raid5f", 00:14:02.189 "superblock": true, 00:14:02.189 "num_base_bdevs": 4, 00:14:02.189 "num_base_bdevs_discovered": 2, 00:14:02.189 
"num_base_bdevs_operational": 4, 00:14:02.189 "base_bdevs_list": [ 00:14:02.189 { 00:14:02.189 "name": null, 00:14:02.189 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:02.189 "is_configured": false, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 63488 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": null, 00:14:02.189 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:02.189 "is_configured": false, 00:14:02.189 "data_offset": 0, 00:14:02.189 "data_size": 63488 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": "BaseBdev3", 00:14:02.189 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:02.189 "is_configured": true, 00:14:02.189 "data_offset": 2048, 00:14:02.189 "data_size": 63488 00:14:02.189 }, 00:14:02.189 { 00:14:02.189 "name": "BaseBdev4", 00:14:02.189 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:02.189 "is_configured": true, 00:14:02.189 "data_offset": 2048, 00:14:02.189 "data_size": 63488 00:14:02.189 } 00:14:02.189 ] 00:14:02.189 }' 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.189 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 [2024-12-15 18:45:02.823693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.450 "name": "Existed_Raid", 00:14:02.450 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:02.450 "strip_size_kb": 64, 00:14:02.450 "state": "configuring", 00:14:02.450 "raid_level": "raid5f", 00:14:02.450 "superblock": true, 00:14:02.450 "num_base_bdevs": 4, 00:14:02.450 "num_base_bdevs_discovered": 3, 00:14:02.450 "num_base_bdevs_operational": 4, 00:14:02.450 "base_bdevs_list": [ 00:14:02.450 { 00:14:02.450 "name": null, 00:14:02.450 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:02.450 "is_configured": false, 00:14:02.450 "data_offset": 0, 00:14:02.450 "data_size": 63488 00:14:02.450 }, 00:14:02.450 { 00:14:02.450 "name": "BaseBdev2", 00:14:02.450 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:02.450 "is_configured": true, 00:14:02.450 "data_offset": 2048, 00:14:02.450 "data_size": 63488 00:14:02.450 }, 00:14:02.450 { 00:14:02.450 "name": "BaseBdev3", 00:14:02.450 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:02.450 "is_configured": true, 00:14:02.450 "data_offset": 2048, 00:14:02.450 "data_size": 63488 00:14:02.450 }, 00:14:02.450 { 00:14:02.450 "name": "BaseBdev4", 00:14:02.450 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:02.450 "is_configured": true, 00:14:02.450 "data_offset": 2048, 00:14:02.450 "data_size": 63488 00:14:02.450 } 00:14:02.450 ] 00:14:02.450 }' 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.450 18:45:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:03.020 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.020 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:03.020 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.020 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e51c5c35-7161-4af5-8232-f7e81b560bdc 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 [2024-12-15 18:45:03.305796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:03.021 [2024-12-15 18:45:03.306113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:03.021 [2024-12-15 
18:45:03.306171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:03.021 [2024-12-15 18:45:03.306469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:03.021 NewBaseBdev 00:14:03.021 [2024-12-15 18:45:03.306945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:03.021 [2024-12-15 18:45:03.307011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:03.021 [2024-12-15 18:45:03.307145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 [ 00:14:03.021 { 00:14:03.021 "name": "NewBaseBdev", 00:14:03.021 "aliases": [ 00:14:03.021 "e51c5c35-7161-4af5-8232-f7e81b560bdc" 00:14:03.021 ], 00:14:03.021 "product_name": "Malloc disk", 00:14:03.021 "block_size": 512, 00:14:03.021 "num_blocks": 65536, 00:14:03.021 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:03.021 "assigned_rate_limits": { 00:14:03.021 "rw_ios_per_sec": 0, 00:14:03.021 "rw_mbytes_per_sec": 0, 00:14:03.021 "r_mbytes_per_sec": 0, 00:14:03.021 "w_mbytes_per_sec": 0 00:14:03.021 }, 00:14:03.021 "claimed": true, 00:14:03.021 "claim_type": "exclusive_write", 00:14:03.021 "zoned": false, 00:14:03.021 "supported_io_types": { 00:14:03.021 "read": true, 00:14:03.021 "write": true, 00:14:03.021 "unmap": true, 00:14:03.021 "flush": true, 00:14:03.021 "reset": true, 00:14:03.021 "nvme_admin": false, 00:14:03.021 "nvme_io": false, 00:14:03.021 "nvme_io_md": false, 00:14:03.021 "write_zeroes": true, 00:14:03.021 "zcopy": true, 00:14:03.021 "get_zone_info": false, 00:14:03.021 "zone_management": false, 00:14:03.021 "zone_append": false, 00:14:03.021 "compare": false, 00:14:03.021 "compare_and_write": false, 00:14:03.021 "abort": true, 00:14:03.021 "seek_hole": false, 00:14:03.021 "seek_data": false, 00:14:03.021 "copy": true, 00:14:03.021 "nvme_iov_md": false 00:14:03.021 }, 00:14:03.021 "memory_domains": [ 00:14:03.021 { 00:14:03.021 "dma_device_id": "system", 00:14:03.021 "dma_device_type": 1 00:14:03.021 }, 00:14:03.021 { 00:14:03.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.021 "dma_device_type": 2 00:14:03.021 } 00:14:03.021 ], 00:14:03.021 "driver_specific": {} 00:14:03.021 } 00:14:03.021 ] 00:14:03.021 18:45:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
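The `verify_raid_bdev_state` helper above pulls one raid bdev's record out of the `bdev_raid_get_bdevs all` list with a `jq` select. The pattern can be exercised standalone; the JSON below is a trimmed stand-in for illustration, not the real RPC response:

```shell
# Illustrative stand-in for "rpc_cmd bdev_raid_get_bdevs all" output; field
# names mirror the trace, but this JSON is trimmed, not the actual response.
raid_json='[{"name":"Existed_Raid","state":"online","num_base_bdevs":4},
            {"name":"other_raid","state":"offline","num_base_bdevs":2}]'
# Same jq shape the test uses: select the entry by name, then read a field.
state=$(printf '%s' "$raid_json" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
echo "$state"
```

Selecting by `.name` rather than by array index keeps the check stable even if the RPC returns bdevs in a different order.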
00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.021 "name": "Existed_Raid", 00:14:03.021 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:03.021 "strip_size_kb": 64, 00:14:03.021 "state": "online", 00:14:03.021 "raid_level": "raid5f", 00:14:03.021 "superblock": true, 00:14:03.021 "num_base_bdevs": 4, 00:14:03.021 "num_base_bdevs_discovered": 4, 00:14:03.021 "num_base_bdevs_operational": 4, 00:14:03.021 "base_bdevs_list": [ 00:14:03.021 { 00:14:03.021 "name": "NewBaseBdev", 00:14:03.021 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:03.021 "is_configured": true, 00:14:03.021 "data_offset": 2048, 00:14:03.021 "data_size": 63488 00:14:03.021 }, 00:14:03.021 { 00:14:03.021 "name": "BaseBdev2", 00:14:03.021 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:03.021 "is_configured": true, 00:14:03.021 "data_offset": 2048, 00:14:03.021 "data_size": 63488 00:14:03.021 }, 00:14:03.021 { 00:14:03.021 "name": "BaseBdev3", 00:14:03.021 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:03.021 "is_configured": true, 00:14:03.021 "data_offset": 2048, 00:14:03.021 "data_size": 63488 00:14:03.021 }, 00:14:03.021 { 00:14:03.021 "name": "BaseBdev4", 00:14:03.021 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:03.021 "is_configured": true, 00:14:03.021 "data_offset": 2048, 00:14:03.021 "data_size": 63488 00:14:03.021 } 00:14:03.021 ] 00:14:03.021 }' 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.021 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.590 [2024-12-15 18:45:03.757262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.590 "name": "Existed_Raid", 00:14:03.590 "aliases": [ 00:14:03.590 "e1ded3aa-0dc9-4cfd-b759-95dc6f978463" 00:14:03.590 ], 00:14:03.590 "product_name": "Raid Volume", 00:14:03.590 "block_size": 512, 00:14:03.590 "num_blocks": 190464, 00:14:03.590 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:03.590 "assigned_rate_limits": { 00:14:03.590 "rw_ios_per_sec": 0, 00:14:03.590 "rw_mbytes_per_sec": 0, 00:14:03.590 "r_mbytes_per_sec": 0, 00:14:03.590 "w_mbytes_per_sec": 0 00:14:03.590 }, 00:14:03.590 "claimed": false, 00:14:03.590 "zoned": false, 00:14:03.590 "supported_io_types": { 00:14:03.590 "read": true, 00:14:03.590 "write": true, 00:14:03.590 "unmap": false, 00:14:03.590 "flush": false, 00:14:03.590 "reset": true, 00:14:03.590 "nvme_admin": false, 00:14:03.590 "nvme_io": false, 
00:14:03.590 "nvme_io_md": false, 00:14:03.590 "write_zeroes": true, 00:14:03.590 "zcopy": false, 00:14:03.590 "get_zone_info": false, 00:14:03.590 "zone_management": false, 00:14:03.590 "zone_append": false, 00:14:03.590 "compare": false, 00:14:03.590 "compare_and_write": false, 00:14:03.590 "abort": false, 00:14:03.590 "seek_hole": false, 00:14:03.590 "seek_data": false, 00:14:03.590 "copy": false, 00:14:03.590 "nvme_iov_md": false 00:14:03.590 }, 00:14:03.590 "driver_specific": { 00:14:03.590 "raid": { 00:14:03.590 "uuid": "e1ded3aa-0dc9-4cfd-b759-95dc6f978463", 00:14:03.590 "strip_size_kb": 64, 00:14:03.590 "state": "online", 00:14:03.590 "raid_level": "raid5f", 00:14:03.590 "superblock": true, 00:14:03.590 "num_base_bdevs": 4, 00:14:03.590 "num_base_bdevs_discovered": 4, 00:14:03.590 "num_base_bdevs_operational": 4, 00:14:03.590 "base_bdevs_list": [ 00:14:03.590 { 00:14:03.590 "name": "NewBaseBdev", 00:14:03.590 "uuid": "e51c5c35-7161-4af5-8232-f7e81b560bdc", 00:14:03.590 "is_configured": true, 00:14:03.590 "data_offset": 2048, 00:14:03.590 "data_size": 63488 00:14:03.590 }, 00:14:03.590 { 00:14:03.590 "name": "BaseBdev2", 00:14:03.590 "uuid": "c2a5bcf6-53b0-4d67-8adf-66ebcd9c6125", 00:14:03.590 "is_configured": true, 00:14:03.590 "data_offset": 2048, 00:14:03.590 "data_size": 63488 00:14:03.590 }, 00:14:03.590 { 00:14:03.590 "name": "BaseBdev3", 00:14:03.590 "uuid": "f6f52498-3702-4565-bb65-4c5e270405a4", 00:14:03.590 "is_configured": true, 00:14:03.590 "data_offset": 2048, 00:14:03.590 "data_size": 63488 00:14:03.590 }, 00:14:03.590 { 00:14:03.590 "name": "BaseBdev4", 00:14:03.590 "uuid": "857a68f2-95c2-48b8-8abb-f89b4da912c7", 00:14:03.590 "is_configured": true, 00:14:03.590 "data_offset": 2048, 00:14:03.590 "data_size": 63488 00:14:03.590 } 00:14:03.590 ] 00:14:03.590 } 00:14:03.590 } 00:14:03.590 }' 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:03.590 BaseBdev2 00:14:03.590 BaseBdev3 00:14:03.590 BaseBdev4' 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.590 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:03.591 18:45:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.591 [2024-12-15 18:45:04.020742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.591 [2024-12-15 18:45:04.020772] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.591 [2024-12-15 18:45:04.020855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.591 [2024-12-15 18:45:04.021110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.591 [2024-12-15 18:45:04.021121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95824 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95824 ']' 00:14:03.591 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 95824 00:14:03.591 18:45:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95824 00:14:03.851 killing process with pid 95824 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95824' 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 95824 00:14:03.851 [2024-12-15 18:45:04.065503] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.851 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 95824 00:14:03.851 [2024-12-15 18:45:04.105669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.112 18:45:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:04.112 00:14:04.112 real 0m9.221s 00:14:04.112 user 0m15.683s 00:14:04.112 sys 0m1.990s 00:14:04.112 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.112 18:45:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.112 ************************************ 00:14:04.112 END TEST raid5f_state_function_test_sb 00:14:04.112 ************************************ 00:14:04.112 18:45:04 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:04.112 18:45:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:04.112 
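The `killprocess` sequence above (`kill -0 95824`, then `ps --no-headers -o comm=`) first probes whether the pid still exists before deciding how to kill it. A minimal sketch of that liveness probe, using the current shell's own pid so it is self-contained:

```shell
# "kill -0" sends no signal at all; it only reports (via exit status)
# whether the pid exists and is signalable by the caller.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
  status=alive
else
  status=gone
fi
echo "$status"
```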
18:45:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.112 18:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.112 ************************************ 00:14:04.112 START TEST raid5f_superblock_test 00:14:04.112 ************************************ 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=96467 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96467 00:14:04.112 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 96467 ']' 00:14:04.113 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.113 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.113 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.113 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.113 18:45:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.113 [2024-12-15 18:45:04.487306] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
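`waitforlisten 96467` above blocks until the freshly launched `bdev_svc` is accepting RPCs on `/var/tmp/spdk.sock`. The general shape is a bounded poll loop; this is a hedged sketch of that idea only, with a temp file standing in for the UNIX socket rather than SPDK's actual readiness check:

```shell
# Bounded poll until a path appears. A background job creates the stand-in
# "socket" after a short delay; the loop retries with a cap, as a
# waitforlisten-style helper would.
sock=$(mktemp -u)
( sleep 0.2; : > "$sock" ) &
ready=no
for _ in $(seq 1 50); do
  if [ -e "$sock" ]; then ready=yes; break; fi
  sleep 0.1
done
rm -f "$sock"
echo "$ready"
```

Capping the retries matters: without it, a daemon that crashes before binding its socket would hang the test run instead of failing it.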
00:14:04.113 [2024-12-15 18:45:04.487451] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96467 ] 00:14:04.372 [2024-12-15 18:45:04.655522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.373 [2024-12-15 18:45:04.679568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.373 [2024-12-15 18:45:04.721353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.373 [2024-12-15 18:45:04.721474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.945 malloc1 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.945 [2024-12-15 18:45:05.328524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.945 [2024-12-15 18:45:05.328674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.945 [2024-12-15 18:45:05.328731] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.945 [2024-12-15 18:45:05.328777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.945 [2024-12-15 18:45:05.330877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.945 [2024-12-15 18:45:05.330948] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.945 pt1 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.945 malloc2 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.945 [2024-12-15 18:45:05.360945] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.945 [2024-12-15 18:45:05.361066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.945 [2024-12-15 18:45:05.361099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.945 [2024-12-15 18:45:05.361128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.945 [2024-12-15 18:45:05.363145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.945 [2024-12-15 18:45:05.363229] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.945 pt2 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.945 malloc3 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.945 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.205 [2024-12-15 18:45:05.389426] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:05.205 [2024-12-15 18:45:05.389529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.205 [2024-12-15 18:45:05.389567] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:05.205 [2024-12-15 18:45:05.389638] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.205 [2024-12-15 18:45:05.391641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.205 [2024-12-15 18:45:05.391714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:05.205 pt3 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.205 18:45:05 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.205 malloc4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.205 [2024-12-15 18:45:05.441392] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:05.205 [2024-12-15 18:45:05.441614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.205 [2024-12-15 18:45:05.441707] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.205 [2024-12-15 18:45:05.441864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.205 [2024-12-15 18:45:05.445378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.205 [2024-12-15 18:45:05.445486] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:05.205 pt4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.205 [2024-12-15 18:45:05.453743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.205 [2024-12-15 18:45:05.455771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.205 [2024-12-15 18:45:05.455914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:05.205 [2024-12-15 18:45:05.456001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:05.205 [2024-12-15 18:45:05.456219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:05.205 [2024-12-15 18:45:05.456274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:05.205 [2024-12-15 18:45:05.456565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:05.205 [2024-12-15 18:45:05.457138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:05.205 [2024-12-15 18:45:05.457198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:05.205 [2024-12-15 18:45:05.457362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.205 
18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.205 "name": "raid_bdev1", 00:14:05.205 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:05.205 "strip_size_kb": 64, 00:14:05.205 "state": "online", 00:14:05.205 "raid_level": "raid5f", 00:14:05.205 "superblock": true, 00:14:05.205 "num_base_bdevs": 4, 00:14:05.205 "num_base_bdevs_discovered": 4, 00:14:05.205 "num_base_bdevs_operational": 4, 00:14:05.205 "base_bdevs_list": [ 00:14:05.205 { 00:14:05.205 "name": "pt1", 00:14:05.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.205 "is_configured": true, 00:14:05.205 "data_offset": 2048, 00:14:05.205 "data_size": 63488 00:14:05.205 }, 00:14:05.205 { 00:14:05.205 "name": "pt2", 00:14:05.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.205 "is_configured": true, 00:14:05.205 "data_offset": 2048, 00:14:05.205 
"data_size": 63488 00:14:05.205 }, 00:14:05.205 { 00:14:05.205 "name": "pt3", 00:14:05.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.205 "is_configured": true, 00:14:05.205 "data_offset": 2048, 00:14:05.205 "data_size": 63488 00:14:05.205 }, 00:14:05.205 { 00:14:05.205 "name": "pt4", 00:14:05.205 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.205 "is_configured": true, 00:14:05.205 "data_offset": 2048, 00:14:05.205 "data_size": 63488 00:14:05.205 } 00:14:05.205 ] 00:14:05.205 }' 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.205 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.775 [2024-12-15 18:45:05.934564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.775 "name": "raid_bdev1", 00:14:05.775 "aliases": [ 00:14:05.775 "3f53c598-5eae-40ac-88ef-419231cfde38" 00:14:05.775 ], 00:14:05.775 "product_name": "Raid Volume", 00:14:05.775 "block_size": 512, 00:14:05.775 "num_blocks": 190464, 00:14:05.775 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:05.775 "assigned_rate_limits": { 00:14:05.775 "rw_ios_per_sec": 0, 00:14:05.775 "rw_mbytes_per_sec": 0, 00:14:05.775 "r_mbytes_per_sec": 0, 00:14:05.775 "w_mbytes_per_sec": 0 00:14:05.775 }, 00:14:05.775 "claimed": false, 00:14:05.775 "zoned": false, 00:14:05.775 "supported_io_types": { 00:14:05.775 "read": true, 00:14:05.775 "write": true, 00:14:05.775 "unmap": false, 00:14:05.775 "flush": false, 00:14:05.775 "reset": true, 00:14:05.775 "nvme_admin": false, 00:14:05.775 "nvme_io": false, 00:14:05.775 "nvme_io_md": false, 00:14:05.775 "write_zeroes": true, 00:14:05.775 "zcopy": false, 00:14:05.775 "get_zone_info": false, 00:14:05.775 "zone_management": false, 00:14:05.775 "zone_append": false, 00:14:05.775 "compare": false, 00:14:05.775 "compare_and_write": false, 00:14:05.775 "abort": false, 00:14:05.775 "seek_hole": false, 00:14:05.775 "seek_data": false, 00:14:05.775 "copy": false, 00:14:05.775 "nvme_iov_md": false 00:14:05.775 }, 00:14:05.775 "driver_specific": { 00:14:05.775 "raid": { 00:14:05.775 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:05.775 "strip_size_kb": 64, 00:14:05.775 "state": "online", 00:14:05.775 "raid_level": "raid5f", 00:14:05.775 "superblock": true, 00:14:05.775 "num_base_bdevs": 4, 00:14:05.775 "num_base_bdevs_discovered": 4, 00:14:05.775 "num_base_bdevs_operational": 4, 00:14:05.775 "base_bdevs_list": [ 00:14:05.775 { 00:14:05.775 "name": "pt1", 00:14:05.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.775 "is_configured": true, 00:14:05.775 "data_offset": 2048, 
00:14:05.775 "data_size": 63488 00:14:05.775 }, 00:14:05.775 { 00:14:05.775 "name": "pt2", 00:14:05.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.775 "is_configured": true, 00:14:05.775 "data_offset": 2048, 00:14:05.775 "data_size": 63488 00:14:05.775 }, 00:14:05.775 { 00:14:05.775 "name": "pt3", 00:14:05.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.775 "is_configured": true, 00:14:05.775 "data_offset": 2048, 00:14:05.775 "data_size": 63488 00:14:05.775 }, 00:14:05.775 { 00:14:05.775 "name": "pt4", 00:14:05.775 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:05.775 "is_configured": true, 00:14:05.775 "data_offset": 2048, 00:14:05.775 "data_size": 63488 00:14:05.775 } 00:14:05.775 ] 00:14:05.775 } 00:14:05.775 } 00:14:05.775 }' 00:14:05.775 18:45:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.775 pt2 00:14:05.775 pt3 00:14:05.775 pt4' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 18:45:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.775 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.776 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 [2024-12-15 18:45:06.218063] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f53c598-5eae-40ac-88ef-419231cfde38 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
3f53c598-5eae-40ac-88ef-419231cfde38 ']' 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 [2024-12-15 18:45:06.261782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.036 [2024-12-15 18:45:06.261871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.036 [2024-12-15 18:45:06.261995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.036 [2024-12-15 18:45:06.262123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.036 [2024-12-15 18:45:06.262173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.036 
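The odd-looking `[[ 512 == \5\1\2\ \ \ ]]` lines in the property checks above come from two bash/jq details: the jq expression `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` renders the null metadata fields as empty strings, so a plain 512-byte-block bdev yields `512` followed by three spaces; and `[[ == ]]` treats an unquoted right-hand side as a glob pattern, so xtrace backslash-escapes every character to show it is matched literally. A self-contained demonstration (values hard-coded to mirror the trace):

```shell
#!/usr/bin/env bash
# Reproduce the comparison string from the trace without jq.
block_size=512
md_size=""         # null in the bdev JSON
md_interleave=""   # null in the bdev JSON
dif_type=""        # null in the bdev JSON

# join(" ") over four fields, three of them empty, leaves three
# trailing spaces after "512".
cmp_raid_bdev="$block_size $md_size $md_interleave $dif_type"
cmp_base_bdev="512   "   # same shape computed from a base bdev
```

This is why the test passes only when the raid volume and every base bdev agree on all four fields, trailing spaces included.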
18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:06.036 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.036 [2024-12-15 18:45:06.425543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:06.036 [2024-12-15 18:45:06.427405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:06.036 [2024-12-15 18:45:06.427515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:06.036 [2024-12-15 18:45:06.427562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:06.036 [2024-12-15 18:45:06.427634] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:06.036 [2024-12-15 18:45:06.427712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:06.036 [2024-12-15 18:45:06.427781] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:06.036 [2024-12-15 18:45:06.427815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:06.036 [2024-12-15 18:45:06.427830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.036 [2024-12-15 18:45:06.427842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:06.036 request: 00:14:06.037 { 00:14:06.037 "name": "raid_bdev1", 00:14:06.037 "raid_level": "raid5f", 00:14:06.037 "base_bdevs": [ 00:14:06.037 "malloc1", 00:14:06.037 "malloc2", 00:14:06.037 "malloc3", 00:14:06.037 "malloc4" 00:14:06.037 ], 00:14:06.037 "strip_size_kb": 64, 00:14:06.037 "superblock": false, 00:14:06.037 "method": "bdev_raid_create", 00:14:06.037 "req_id": 1 00:14:06.037 } 00:14:06.037 Got JSON-RPC error response 
00:14:06.037 response: 00:14:06.037 { 00:14:06.037 "code": -17, 00:14:06.037 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:06.037 } 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:06.037 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.297 [2024-12-15 18:45:06.493363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:06.297 [2024-12-15 18:45:06.493411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:06.297 [2024-12-15 18:45:06.493429] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:06.297 [2024-12-15 18:45:06.493438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.297 [2024-12-15 18:45:06.495448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.297 [2024-12-15 18:45:06.495483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:06.297 [2024-12-15 18:45:06.495548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:06.297 [2024-12-15 18:45:06.495584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:06.297 pt1 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.297 "name": "raid_bdev1", 00:14:06.297 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:06.297 "strip_size_kb": 64, 00:14:06.297 "state": "configuring", 00:14:06.297 "raid_level": "raid5f", 00:14:06.297 "superblock": true, 00:14:06.297 "num_base_bdevs": 4, 00:14:06.297 "num_base_bdevs_discovered": 1, 00:14:06.297 "num_base_bdevs_operational": 4, 00:14:06.297 "base_bdevs_list": [ 00:14:06.297 { 00:14:06.297 "name": "pt1", 00:14:06.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.297 "is_configured": true, 00:14:06.297 "data_offset": 2048, 00:14:06.297 "data_size": 63488 00:14:06.297 }, 00:14:06.297 { 00:14:06.297 "name": null, 00:14:06.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.297 "is_configured": false, 00:14:06.297 "data_offset": 2048, 00:14:06.297 "data_size": 63488 00:14:06.297 }, 00:14:06.297 { 00:14:06.297 "name": null, 00:14:06.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.297 "is_configured": false, 00:14:06.297 "data_offset": 2048, 00:14:06.297 "data_size": 63488 00:14:06.297 }, 00:14:06.297 { 00:14:06.297 "name": null, 00:14:06.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.297 "is_configured": false, 00:14:06.297 "data_offset": 2048, 00:14:06.297 "data_size": 63488 00:14:06.297 } 00:14:06.297 ] 00:14:06.297 }' 
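`verify_raid_bdev_state` extracts the `raid_bdev_info` JSON above with jq (`.[] | select(.name == "raid_bdev1")`) and compares fields such as `state`, `raid_level`, and `num_base_bdevs_discovered` against expectations. The bash-only stand-in below only illustrates that field-checking step; `get_field` is a hypothetical helper (the real script uses jq), and the sample JSON is a hand-trimmed copy of the configuring-state output in the trace.

```shell
#!/usr/bin/env bash
# Illustrative field checks in the spirit of verify_raid_bdev_state.
# get_field is a HYPOTHETICAL helper; the SPDK scripts use jq instead.
raid_bdev_info='{ "name": "raid_bdev1", "state": "configuring",
  "raid_level": "raid5f", "num_base_bdevs_discovered": 1}'

get_field() {
    local json=$1 field=$2
    # Capture the (possibly unquoted) value after "field":
    [[ $json =~ \"$field\":\ *\"?([^\",}]+) ]] && echo "${BASH_REMATCH[1]}"
}

state=$(get_field "$raid_bdev_info" state)
level=$(get_field "$raid_bdev_info" raid_level)
discovered=$(get_field "$raid_bdev_info" num_base_bdevs_discovered)
```

After only pt1 is re-registered, the expected tuple is exactly what the trace shows: state `configuring`, one of four base bdevs discovered.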
00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.297 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.557 [2024-12-15 18:45:06.884793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.557 [2024-12-15 18:45:06.884915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.557 [2024-12-15 18:45:06.884955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:06.557 [2024-12-15 18:45:06.884984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.557 [2024-12-15 18:45:06.885367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.557 [2024-12-15 18:45:06.885424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.557 [2024-12-15 18:45:06.885519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.557 [2024-12-15 18:45:06.885568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.557 pt2 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.557 [2024-12-15 18:45:06.892795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:06.557 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.557 "name": "raid_bdev1", 00:14:06.557 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:06.557 "strip_size_kb": 64, 00:14:06.557 "state": "configuring", 00:14:06.557 "raid_level": "raid5f", 00:14:06.557 "superblock": true, 00:14:06.557 "num_base_bdevs": 4, 00:14:06.557 "num_base_bdevs_discovered": 1, 00:14:06.557 "num_base_bdevs_operational": 4, 00:14:06.557 "base_bdevs_list": [ 00:14:06.557 { 00:14:06.557 "name": "pt1", 00:14:06.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.557 "is_configured": true, 00:14:06.557 "data_offset": 2048, 00:14:06.557 "data_size": 63488 00:14:06.557 }, 00:14:06.557 { 00:14:06.557 "name": null, 00:14:06.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.558 "is_configured": false, 00:14:06.558 "data_offset": 0, 00:14:06.558 "data_size": 63488 00:14:06.558 }, 00:14:06.558 { 00:14:06.558 "name": null, 00:14:06.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.558 "is_configured": false, 00:14:06.558 "data_offset": 2048, 00:14:06.558 "data_size": 63488 00:14:06.558 }, 00:14:06.558 { 00:14:06.558 "name": null, 00:14:06.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:06.558 "is_configured": false, 00:14:06.558 "data_offset": 2048, 00:14:06.558 "data_size": 63488 00:14:06.558 } 00:14:06.558 ] 00:14:06.558 }' 00:14:06.558 18:45:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.558 18:45:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
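Earlier in this section, the duplicate `bdev_raid_create` over malloc1..malloc4 is a negative test: the RPC is expected to fail with `-17` ("File exists"), so it is wrapped in `NOT`, and the trace's `es=1` / `(( es > 128 ))` bookkeeping inverts the exit status while still treating codes above 128 (signal deaths) as genuine failures. The following is a simplified re-implementation of that pattern, not the exact upstream `autotest_common.sh` function, with stubbed commands standing in for `rpc_cmd`:

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT expected-failure wrapper.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal:
    # propagate that as a real failure.
    (( es > 128 )) && return "$es"
    # If the command unexpectedly succeeded, the negative test failed.
    (( es == 0 )) && return 1
    return 0
}

# Stubs standing in for rpc_cmd outcomes (illustrative only).
fake_rpc_fail() { echo "Failed to create RAID bdev: File exists" >&2; return 1; }
fake_rpc_ok()   { return 0; }
```

Used as `NOT rpc_cmd bdev_raid_create …`, this turns the expected JSON-RPC error into a passing assertion without masking crashes.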
00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-12-15 18:45:07.272216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:07.128 [2024-12-15 18:45:07.272297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-12-15 18:45:07.272319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:07.128 [2024-12-15 18:45:07.272332] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 [2024-12-15 18:45:07.272720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-12-15 18:45:07.272739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:07.128 [2024-12-15 18:45:07.272807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:07.128 [2024-12-15 18:45:07.272842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.128 pt2 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-12-15 18:45:07.284182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:07.128 [2024-12-15 18:45:07.284232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-12-15 18:45:07.284262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:07.128 [2024-12-15 18:45:07.284272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 [2024-12-15 18:45:07.284565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-12-15 18:45:07.284583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:07.128 [2024-12-15 18:45:07.284633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:07.128 [2024-12-15 18:45:07.284668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:07.128 pt3 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.128 [2024-12-15 18:45:07.296133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:07.128 [2024-12-15 18:45:07.296180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.128 [2024-12-15 18:45:07.296210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:07.128 [2024-12-15 18:45:07.296219] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.128 [2024-12-15 18:45:07.296511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.128 [2024-12-15 18:45:07.296540] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:07.128 [2024-12-15 18:45:07.296589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:07.128 [2024-12-15 18:45:07.296607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:07.128 [2024-12-15 18:45:07.296735] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:07.128 [2024-12-15 18:45:07.296748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:07.128 [2024-12-15 18:45:07.296972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:07.128 [2024-12-15 18:45:07.297417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:07.128 [2024-12-15 18:45:07.297433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:07.128 [2024-12-15 18:45:07.297534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.128 pt4 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.128 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.129 "name": "raid_bdev1", 00:14:07.129 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:07.129 "strip_size_kb": 64, 00:14:07.129 "state": "online", 00:14:07.129 "raid_level": "raid5f", 00:14:07.129 "superblock": true, 00:14:07.129 "num_base_bdevs": 4, 00:14:07.129 "num_base_bdevs_discovered": 4, 00:14:07.129 "num_base_bdevs_operational": 4, 00:14:07.129 "base_bdevs_list": [ 00:14:07.129 { 00:14:07.129 "name": "pt1", 00:14:07.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.129 "is_configured": true, 00:14:07.129 
"data_offset": 2048, 00:14:07.129 "data_size": 63488 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "pt2", 00:14:07.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 2048, 00:14:07.129 "data_size": 63488 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "pt3", 00:14:07.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 2048, 00:14:07.129 "data_size": 63488 00:14:07.129 }, 00:14:07.129 { 00:14:07.129 "name": "pt4", 00:14:07.129 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.129 "is_configured": true, 00:14:07.129 "data_offset": 2048, 00:14:07.129 "data_size": 63488 00:14:07.129 } 00:14:07.129 ] 00:14:07.129 }' 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.129 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.389 18:45:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.389 [2024-12-15 18:45:07.763530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:07.389 "name": "raid_bdev1", 00:14:07.389 "aliases": [ 00:14:07.389 "3f53c598-5eae-40ac-88ef-419231cfde38" 00:14:07.389 ], 00:14:07.389 "product_name": "Raid Volume", 00:14:07.389 "block_size": 512, 00:14:07.389 "num_blocks": 190464, 00:14:07.389 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:07.389 "assigned_rate_limits": { 00:14:07.389 "rw_ios_per_sec": 0, 00:14:07.389 "rw_mbytes_per_sec": 0, 00:14:07.389 "r_mbytes_per_sec": 0, 00:14:07.389 "w_mbytes_per_sec": 0 00:14:07.389 }, 00:14:07.389 "claimed": false, 00:14:07.389 "zoned": false, 00:14:07.389 "supported_io_types": { 00:14:07.389 "read": true, 00:14:07.389 "write": true, 00:14:07.389 "unmap": false, 00:14:07.389 "flush": false, 00:14:07.389 "reset": true, 00:14:07.389 "nvme_admin": false, 00:14:07.389 "nvme_io": false, 00:14:07.389 "nvme_io_md": false, 00:14:07.389 "write_zeroes": true, 00:14:07.389 "zcopy": false, 00:14:07.389 "get_zone_info": false, 00:14:07.389 "zone_management": false, 00:14:07.389 "zone_append": false, 00:14:07.389 "compare": false, 00:14:07.389 "compare_and_write": false, 00:14:07.389 "abort": false, 00:14:07.389 "seek_hole": false, 00:14:07.389 "seek_data": false, 00:14:07.389 "copy": false, 00:14:07.389 "nvme_iov_md": false 00:14:07.389 }, 00:14:07.389 "driver_specific": { 00:14:07.389 "raid": { 00:14:07.389 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:07.389 "strip_size_kb": 64, 00:14:07.389 "state": "online", 00:14:07.389 "raid_level": "raid5f", 00:14:07.389 "superblock": true, 00:14:07.389 "num_base_bdevs": 4, 00:14:07.389 "num_base_bdevs_discovered": 4, 
00:14:07.389 "num_base_bdevs_operational": 4, 00:14:07.389 "base_bdevs_list": [ 00:14:07.389 { 00:14:07.389 "name": "pt1", 00:14:07.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.389 "is_configured": true, 00:14:07.389 "data_offset": 2048, 00:14:07.389 "data_size": 63488 00:14:07.389 }, 00:14:07.389 { 00:14:07.389 "name": "pt2", 00:14:07.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.389 "is_configured": true, 00:14:07.389 "data_offset": 2048, 00:14:07.389 "data_size": 63488 00:14:07.389 }, 00:14:07.389 { 00:14:07.389 "name": "pt3", 00:14:07.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.389 "is_configured": true, 00:14:07.389 "data_offset": 2048, 00:14:07.389 "data_size": 63488 00:14:07.389 }, 00:14:07.389 { 00:14:07.389 "name": "pt4", 00:14:07.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.389 "is_configured": true, 00:14:07.389 "data_offset": 2048, 00:14:07.389 "data_size": 63488 00:14:07.389 } 00:14:07.389 ] 00:14:07.389 } 00:14:07.389 } 00:14:07.389 }' 00:14:07.389 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:07.649 pt2 00:14:07.649 pt3 00:14:07.649 pt4' 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.649 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.650 18:45:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.910 [2024-12-15 18:45:08.106982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.910 18:45:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3f53c598-5eae-40ac-88ef-419231cfde38 '!=' 3f53c598-5eae-40ac-88ef-419231cfde38 ']' 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.910 [2024-12-15 18:45:08.134753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.910 "name": "raid_bdev1", 00:14:07.910 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:07.910 "strip_size_kb": 64, 00:14:07.910 "state": "online", 00:14:07.910 "raid_level": "raid5f", 00:14:07.910 "superblock": true, 00:14:07.910 "num_base_bdevs": 4, 00:14:07.910 "num_base_bdevs_discovered": 3, 00:14:07.910 "num_base_bdevs_operational": 3, 00:14:07.910 "base_bdevs_list": [ 00:14:07.910 { 00:14:07.910 "name": null, 00:14:07.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.910 "is_configured": false, 00:14:07.910 "data_offset": 0, 00:14:07.910 "data_size": 63488 00:14:07.910 }, 00:14:07.910 { 00:14:07.910 "name": "pt2", 00:14:07.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.910 "is_configured": true, 00:14:07.910 "data_offset": 2048, 00:14:07.910 "data_size": 63488 00:14:07.910 }, 00:14:07.910 { 00:14:07.910 "name": "pt3", 00:14:07.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.910 "is_configured": true, 00:14:07.910 "data_offset": 2048, 00:14:07.910 "data_size": 63488 00:14:07.910 }, 00:14:07.910 { 00:14:07.910 "name": "pt4", 00:14:07.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:07.910 "is_configured": true, 00:14:07.910 
"data_offset": 2048, 00:14:07.910 "data_size": 63488 00:14:07.910 } 00:14:07.910 ] 00:14:07.910 }' 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.910 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.170 [2024-12-15 18:45:08.534047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:08.170 [2024-12-15 18:45:08.534128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.170 [2024-12-15 18:45:08.534247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.170 [2024-12-15 18:45:08.534348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.170 [2024-12-15 18:45:08.534399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:08.170 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.171 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 [2024-12-15 18:45:08.633852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.431 [2024-12-15 18:45:08.633945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.431 [2024-12-15 18:45:08.633980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:08.431 [2024-12-15 18:45:08.633990] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.431 [2024-12-15 18:45:08.636064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.431 [2024-12-15 18:45:08.636105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.431 [2024-12-15 18:45:08.636171] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:08.431 [2024-12-15 18:45:08.636205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.431 pt2 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.431 "name": "raid_bdev1", 00:14:08.431 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:08.431 "strip_size_kb": 64, 00:14:08.431 "state": "configuring", 00:14:08.431 "raid_level": "raid5f", 00:14:08.431 "superblock": true, 00:14:08.431 
"num_base_bdevs": 4, 00:14:08.431 "num_base_bdevs_discovered": 1, 00:14:08.431 "num_base_bdevs_operational": 3, 00:14:08.431 "base_bdevs_list": [ 00:14:08.431 { 00:14:08.431 "name": null, 00:14:08.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.431 "is_configured": false, 00:14:08.431 "data_offset": 2048, 00:14:08.431 "data_size": 63488 00:14:08.431 }, 00:14:08.431 { 00:14:08.431 "name": "pt2", 00:14:08.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.431 "is_configured": true, 00:14:08.431 "data_offset": 2048, 00:14:08.431 "data_size": 63488 00:14:08.431 }, 00:14:08.431 { 00:14:08.431 "name": null, 00:14:08.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.431 "is_configured": false, 00:14:08.431 "data_offset": 2048, 00:14:08.431 "data_size": 63488 00:14:08.431 }, 00:14:08.431 { 00:14:08.431 "name": null, 00:14:08.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.431 "is_configured": false, 00:14:08.431 "data_offset": 2048, 00:14:08.431 "data_size": 63488 00:14:08.431 } 00:14:08.431 ] 00:14:08.431 }' 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.431 18:45:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.691 [2024-12-15 18:45:09.017187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:08.691 [2024-12-15 
18:45:09.017317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.691 [2024-12-15 18:45:09.017365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:08.691 [2024-12-15 18:45:09.017398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.691 [2024-12-15 18:45:09.017777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.691 [2024-12-15 18:45:09.017844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:08.691 [2024-12-15 18:45:09.017935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:08.691 [2024-12-15 18:45:09.017984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:08.691 pt3 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.691 "name": "raid_bdev1", 00:14:08.691 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:08.691 "strip_size_kb": 64, 00:14:08.691 "state": "configuring", 00:14:08.691 "raid_level": "raid5f", 00:14:08.691 "superblock": true, 00:14:08.691 "num_base_bdevs": 4, 00:14:08.691 "num_base_bdevs_discovered": 2, 00:14:08.691 "num_base_bdevs_operational": 3, 00:14:08.691 "base_bdevs_list": [ 00:14:08.691 { 00:14:08.691 "name": null, 00:14:08.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.691 "is_configured": false, 00:14:08.691 "data_offset": 2048, 00:14:08.691 "data_size": 63488 00:14:08.691 }, 00:14:08.691 { 00:14:08.691 "name": "pt2", 00:14:08.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.691 "is_configured": true, 00:14:08.691 "data_offset": 2048, 00:14:08.691 "data_size": 63488 00:14:08.691 }, 00:14:08.691 { 00:14:08.691 "name": "pt3", 00:14:08.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.691 "is_configured": true, 00:14:08.691 "data_offset": 2048, 00:14:08.691 "data_size": 63488 00:14:08.691 }, 00:14:08.691 { 00:14:08.691 "name": null, 00:14:08.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:08.691 "is_configured": false, 00:14:08.691 "data_offset": 2048, 
00:14:08.691 "data_size": 63488 00:14:08.691 } 00:14:08.691 ] 00:14:08.691 }' 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.691 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.262 [2024-12-15 18:45:09.496413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:09.262 [2024-12-15 18:45:09.496480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.262 [2024-12-15 18:45:09.496500] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:09.262 [2024-12-15 18:45:09.496511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.262 [2024-12-15 18:45:09.496937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.262 [2024-12-15 18:45:09.496959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:09.262 [2024-12-15 18:45:09.497031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:09.262 [2024-12-15 18:45:09.497053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:09.262 [2024-12-15 18:45:09.497154] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:09.262 [2024-12-15 18:45:09.497171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:09.262 [2024-12-15 18:45:09.497404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:09.262 [2024-12-15 18:45:09.497932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:09.262 [2024-12-15 18:45:09.498002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:09.262 [2024-12-15 18:45:09.498242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.262 pt4 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.262 
18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.262 "name": "raid_bdev1", 00:14:09.262 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:09.262 "strip_size_kb": 64, 00:14:09.262 "state": "online", 00:14:09.262 "raid_level": "raid5f", 00:14:09.262 "superblock": true, 00:14:09.262 "num_base_bdevs": 4, 00:14:09.262 "num_base_bdevs_discovered": 3, 00:14:09.262 "num_base_bdevs_operational": 3, 00:14:09.262 "base_bdevs_list": [ 00:14:09.262 { 00:14:09.262 "name": null, 00:14:09.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.262 "is_configured": false, 00:14:09.262 "data_offset": 2048, 00:14:09.262 "data_size": 63488 00:14:09.262 }, 00:14:09.262 { 00:14:09.262 "name": "pt2", 00:14:09.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.262 "is_configured": true, 00:14:09.262 "data_offset": 2048, 00:14:09.262 "data_size": 63488 00:14:09.262 }, 00:14:09.262 { 00:14:09.262 "name": "pt3", 00:14:09.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.262 "is_configured": true, 00:14:09.262 "data_offset": 2048, 00:14:09.262 "data_size": 63488 00:14:09.262 }, 00:14:09.262 { 00:14:09.262 "name": "pt4", 00:14:09.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.262 "is_configured": true, 00:14:09.262 "data_offset": 2048, 00:14:09.262 "data_size": 63488 00:14:09.262 } 00:14:09.262 ] 00:14:09.262 }' 00:14:09.262 18:45:09 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.262 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.522 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.522 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 [2024-12-15 18:45:09.883730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.523 [2024-12-15 18:45:09.883828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.523 [2024-12-15 18:45:09.883917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.523 [2024-12-15 18:45:09.884016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.523 [2024-12-15 18:45:09.884063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.523 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.523 [2024-12-15 18:45:09.955600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:09.523 [2024-12-15 18:45:09.955719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.523 [2024-12-15 18:45:09.955759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:09.523 [2024-12-15 18:45:09.955813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.523 [2024-12-15 18:45:09.958209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.523 [2024-12-15 18:45:09.958299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:09.523 [2024-12-15 18:45:09.958400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:09.523 [2024-12-15 18:45:09.958477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:09.523 
[2024-12-15 18:45:09.958630] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:09.523 [2024-12-15 18:45:09.958690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.523 [2024-12-15 18:45:09.958715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:09.523 [2024-12-15 18:45:09.958762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.523 [2024-12-15 18:45:09.958894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.783 pt1 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.783 18:45:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.783 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.783 "name": "raid_bdev1", 00:14:09.783 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:09.783 "strip_size_kb": 64, 00:14:09.783 "state": "configuring", 00:14:09.783 "raid_level": "raid5f", 00:14:09.783 "superblock": true, 00:14:09.783 "num_base_bdevs": 4, 00:14:09.783 "num_base_bdevs_discovered": 2, 00:14:09.783 "num_base_bdevs_operational": 3, 00:14:09.783 "base_bdevs_list": [ 00:14:09.783 { 00:14:09.783 "name": null, 00:14:09.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.783 "is_configured": false, 00:14:09.783 "data_offset": 2048, 00:14:09.783 "data_size": 63488 00:14:09.783 }, 00:14:09.783 { 00:14:09.783 "name": "pt2", 00:14:09.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.783 "is_configured": true, 00:14:09.783 "data_offset": 2048, 00:14:09.783 "data_size": 63488 00:14:09.783 }, 00:14:09.783 { 00:14:09.783 "name": "pt3", 00:14:09.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.783 "is_configured": true, 00:14:09.783 "data_offset": 2048, 00:14:09.783 "data_size": 63488 00:14:09.783 }, 00:14:09.783 { 00:14:09.783 "name": null, 00:14:09.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:09.783 "is_configured": false, 00:14:09.783 "data_offset": 2048, 00:14:09.783 "data_size": 63488 00:14:09.783 } 00:14:09.783 ] 
00:14:09.783 }' 00:14:09.783 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.783 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.043 [2024-12-15 18:45:10.466724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:10.043 [2024-12-15 18:45:10.466862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.043 [2024-12-15 18:45:10.466901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:10.043 [2024-12-15 18:45:10.466935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.043 [2024-12-15 18:45:10.467341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.043 [2024-12-15 18:45:10.467409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:10.043 [2024-12-15 18:45:10.467507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:10.043 [2024-12-15 18:45:10.467559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:10.043 [2024-12-15 18:45:10.467691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:10.043 [2024-12-15 18:45:10.467732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:10.043 [2024-12-15 18:45:10.467995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:10.043 [2024-12-15 18:45:10.468583] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:10.043 [2024-12-15 18:45:10.468634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:10.043 [2024-12-15 18:45:10.468921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.043 pt4 00:14:10.043 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.044 18:45:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.044 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.303 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.303 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.303 "name": "raid_bdev1", 00:14:10.303 "uuid": "3f53c598-5eae-40ac-88ef-419231cfde38", 00:14:10.303 "strip_size_kb": 64, 00:14:10.303 "state": "online", 00:14:10.303 "raid_level": "raid5f", 00:14:10.303 "superblock": true, 00:14:10.303 "num_base_bdevs": 4, 00:14:10.303 "num_base_bdevs_discovered": 3, 00:14:10.303 "num_base_bdevs_operational": 3, 00:14:10.303 "base_bdevs_list": [ 00:14:10.303 { 00:14:10.303 "name": null, 00:14:10.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.303 "is_configured": false, 00:14:10.303 "data_offset": 2048, 00:14:10.303 "data_size": 63488 00:14:10.303 }, 00:14:10.303 { 00:14:10.303 "name": "pt2", 00:14:10.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.303 "is_configured": true, 00:14:10.303 "data_offset": 2048, 00:14:10.303 "data_size": 63488 00:14:10.303 }, 00:14:10.303 { 00:14:10.303 "name": "pt3", 00:14:10.303 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.303 "is_configured": true, 00:14:10.303 "data_offset": 2048, 00:14:10.303 "data_size": 63488 
00:14:10.303 }, 00:14:10.303 { 00:14:10.303 "name": "pt4", 00:14:10.303 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.303 "is_configured": true, 00:14:10.303 "data_offset": 2048, 00:14:10.303 "data_size": 63488 00:14:10.303 } 00:14:10.303 ] 00:14:10.303 }' 00:14:10.304 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.304 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.563 [2024-12-15 18:45:10.970082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3f53c598-5eae-40ac-88ef-419231cfde38 '!=' 3f53c598-5eae-40ac-88ef-419231cfde38 ']' 00:14:10.563 18:45:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96467 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 96467 ']' 00:14:10.563 18:45:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 96467 00:14:10.563 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96467 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.834 killing process with pid 96467 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96467' 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 96467 00:14:10.834 [2024-12-15 18:45:11.041873] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.834 [2024-12-15 18:45:11.041954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.834 [2024-12-15 18:45:11.042027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.834 [2024-12-15 18:45:11.042037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:10.834 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 96467 00:14:10.834 [2024-12-15 18:45:11.085612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.111 18:45:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:11.111 
************************************ 00:14:11.111 END TEST raid5f_superblock_test 00:14:11.111 ************************************ 00:14:11.111 00:14:11.111 real 0m6.898s 00:14:11.111 user 0m11.532s 00:14:11.111 sys 0m1.527s 00:14:11.111 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.111 18:45:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 18:45:11 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:11.111 18:45:11 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:11.111 18:45:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:11.111 18:45:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.111 18:45:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 ************************************ 00:14:11.111 START TEST raid5f_rebuild_test 00:14:11.111 ************************************ 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:11.111 18:45:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96930 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96930 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 96930 ']' 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.111 18:45:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 [2024-12-15 18:45:11.477998] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:11.111 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.111 Zero copy mechanism will not be used. 
00:14:11.111 [2024-12-15 18:45:11.478193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96930 ] 00:14:11.370 [2024-12-15 18:45:11.647635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.371 [2024-12-15 18:45:11.671796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.371 [2024-12-15 18:45:11.713653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.371 [2024-12-15 18:45:11.713774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.939 BaseBdev1_malloc 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.939 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 [2024-12-15 18:45:12.317008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:14:11.940 [2024-12-15 18:45:12.317150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.940 [2024-12-15 18:45:12.317201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.940 [2024-12-15 18:45:12.317233] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.940 [2024-12-15 18:45:12.319266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.940 [2024-12-15 18:45:12.319353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.940 BaseBdev1 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 BaseBdev2_malloc 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 [2024-12-15 18:45:12.345450] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.940 [2024-12-15 18:45:12.345554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.940 [2024-12-15 18:45:12.345591] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.940 [2024-12-15 18:45:12.345641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.940 [2024-12-15 18:45:12.347641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.940 [2024-12-15 18:45:12.347727] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.940 BaseBdev2 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 BaseBdev3_malloc 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.940 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.940 [2024-12-15 18:45:12.373943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.940 [2024-12-15 18:45:12.373992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.940 [2024-12-15 18:45:12.374030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.940 [2024-12-15 18:45:12.374039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.940 
[2024-12-15 18:45:12.376075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.940 [2024-12-15 18:45:12.376158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.200 BaseBdev3 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 BaseBdev4_malloc 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 [2024-12-15 18:45:12.420266] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.200 [2024-12-15 18:45:12.420424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.200 [2024-12-15 18:45:12.420470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:12.200 [2024-12-15 18:45:12.420485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.200 [2024-12-15 18:45:12.423683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.200 [2024-12-15 18:45:12.423791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:14:12.200 BaseBdev4 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 spare_malloc 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 spare_delay 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 [2024-12-15 18:45:12.461234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.200 [2024-12-15 18:45:12.461285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.200 [2024-12-15 18:45:12.461304] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:12.200 [2024-12-15 18:45:12.461312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.200 [2024-12-15 18:45:12.463336] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.200 [2024-12-15 18:45:12.463372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.200 spare 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 [2024-12-15 18:45:12.473283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.200 [2024-12-15 18:45:12.475103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.200 [2024-12-15 18:45:12.475169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.200 [2024-12-15 18:45:12.475210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.200 [2024-12-15 18:45:12.475296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:12.200 [2024-12-15 18:45:12.475306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:12.200 [2024-12-15 18:45:12.475538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:12.200 [2024-12-15 18:45:12.476009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:12.200 [2024-12-15 18:45:12.476030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:12.200 [2024-12-15 18:45:12.476151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.200 18:45:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.200 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.200 "name": "raid_bdev1", 00:14:12.200 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:12.200 "strip_size_kb": 64, 00:14:12.200 "state": "online", 00:14:12.200 
"raid_level": "raid5f", 00:14:12.200 "superblock": false, 00:14:12.200 "num_base_bdevs": 4, 00:14:12.200 "num_base_bdevs_discovered": 4, 00:14:12.200 "num_base_bdevs_operational": 4, 00:14:12.200 "base_bdevs_list": [ 00:14:12.200 { 00:14:12.201 "name": "BaseBdev1", 00:14:12.201 "uuid": "fbc11c72-920c-5c94-9ba8-2f364885a5dc", 00:14:12.201 "is_configured": true, 00:14:12.201 "data_offset": 0, 00:14:12.201 "data_size": 65536 00:14:12.201 }, 00:14:12.201 { 00:14:12.201 "name": "BaseBdev2", 00:14:12.201 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:12.201 "is_configured": true, 00:14:12.201 "data_offset": 0, 00:14:12.201 "data_size": 65536 00:14:12.201 }, 00:14:12.201 { 00:14:12.201 "name": "BaseBdev3", 00:14:12.201 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:12.201 "is_configured": true, 00:14:12.201 "data_offset": 0, 00:14:12.201 "data_size": 65536 00:14:12.201 }, 00:14:12.201 { 00:14:12.201 "name": "BaseBdev4", 00:14:12.201 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:12.201 "is_configured": true, 00:14:12.201 "data_offset": 0, 00:14:12.201 "data_size": 65536 00:14:12.201 } 00:14:12.201 ] 00:14:12.201 }' 00:14:12.201 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.201 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.770 [2024-12-15 18:45:12.945229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.770 18:45:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:12.770 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:12.770 [2024-12-15 18:45:13.200769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:13.031 /dev/nbd0 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:13.031 1+0 records in 00:14:13.031 1+0 records out 00:14:13.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608175 s, 6.7 MB/s 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:13.031 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:13.291 512+0 records in 00:14:13.291 512+0 records out 00:14:13.291 100663296 bytes (101 MB, 96 MiB) copied, 0.394848 s, 255 MB/s 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.291 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.552 
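The `dd` parameters above (`bs=196608`, `write_unit_size=384`, the `echo 192`) all follow from the raid5f geometry: with 4 members, each 196608-byte stripe carries data on 3 of them while the fourth strip holds parity, so writing in full-stripe units avoids parity read-modify-write. A quick sketch of the arithmetic, using the values from this log:

```shell
# raid5f geometry from the log: 4 base bdevs, 64 KiB strip, 512 B blocks
num_base_bdevs=4
strip_size_kb=64
blocklen=512

# one full stripe holds data on (n-1) members; the remaining strip is parity
full_stripe_bytes=$(( (num_base_bdevs - 1) * strip_size_kb * 1024 ))
write_unit_blocks=$(( full_stripe_bytes / blocklen ))
total_bytes=$(( 512 * full_stripe_bytes ))   # dd count=512 full stripes

echo "$full_stripe_bytes $write_unit_blocks $total_bytes"
# -> 196608 384 100663296
```

This reproduces the numbers in the log: `dd bs=196608`, `write_unit_size=384` blocks (192 KiB), and the `100663296 bytes (101 MB, 96 MiB)` total.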
[2024-12-15 18:45:13.886358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.552 [2024-12-15 18:45:13.904020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.552 "name": "raid_bdev1", 00:14:13.552 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:13.552 "strip_size_kb": 64, 00:14:13.552 "state": "online", 00:14:13.552 "raid_level": "raid5f", 00:14:13.552 "superblock": false, 00:14:13.552 "num_base_bdevs": 4, 00:14:13.552 "num_base_bdevs_discovered": 3, 00:14:13.552 "num_base_bdevs_operational": 3, 00:14:13.552 "base_bdevs_list": [ 00:14:13.552 { 00:14:13.552 "name": null, 00:14:13.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.552 "is_configured": false, 00:14:13.552 "data_offset": 0, 00:14:13.552 "data_size": 65536 00:14:13.552 }, 00:14:13.552 { 00:14:13.552 "name": "BaseBdev2", 00:14:13.552 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:13.552 "is_configured": true, 00:14:13.552 "data_offset": 0, 00:14:13.552 "data_size": 65536 00:14:13.552 }, 00:14:13.552 { 00:14:13.552 "name": "BaseBdev3", 00:14:13.552 "uuid": 
"a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:13.552 "is_configured": true, 00:14:13.552 "data_offset": 0, 00:14:13.552 "data_size": 65536 00:14:13.552 }, 00:14:13.552 { 00:14:13.552 "name": "BaseBdev4", 00:14:13.552 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:13.552 "is_configured": true, 00:14:13.552 "data_offset": 0, 00:14:13.552 "data_size": 65536 00:14:13.552 } 00:14:13.552 ] 00:14:13.552 }' 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.552 18:45:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.121 18:45:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.121 18:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.121 18:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.121 [2024-12-15 18:45:14.347281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.121 [2024-12-15 18:45:14.351513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:14.121 18:45:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.121 18:45:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.121 [2024-12-15 18:45:14.353741] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.060 18:45:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.060 "name": "raid_bdev1", 00:14:15.060 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:15.060 "strip_size_kb": 64, 00:14:15.060 "state": "online", 00:14:15.060 "raid_level": "raid5f", 00:14:15.060 "superblock": false, 00:14:15.060 "num_base_bdevs": 4, 00:14:15.060 "num_base_bdevs_discovered": 4, 00:14:15.060 "num_base_bdevs_operational": 4, 00:14:15.060 "process": { 00:14:15.060 "type": "rebuild", 00:14:15.060 "target": "spare", 00:14:15.060 "progress": { 00:14:15.060 "blocks": 19200, 00:14:15.060 "percent": 9 00:14:15.060 } 00:14:15.060 }, 00:14:15.060 "base_bdevs_list": [ 00:14:15.060 { 00:14:15.060 "name": "spare", 00:14:15.060 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:15.060 "is_configured": true, 00:14:15.060 "data_offset": 0, 00:14:15.060 "data_size": 65536 00:14:15.060 }, 00:14:15.060 { 00:14:15.060 "name": "BaseBdev2", 00:14:15.060 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:15.060 "is_configured": true, 00:14:15.060 "data_offset": 0, 00:14:15.060 "data_size": 65536 00:14:15.060 }, 00:14:15.060 { 00:14:15.060 "name": "BaseBdev3", 00:14:15.060 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:15.060 "is_configured": true, 00:14:15.060 "data_offset": 0, 00:14:15.060 "data_size": 65536 00:14:15.060 }, 
00:14:15.060 { 00:14:15.060 "name": "BaseBdev4", 00:14:15.060 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:15.060 "is_configured": true, 00:14:15.060 "data_offset": 0, 00:14:15.060 "data_size": 65536 00:14:15.060 } 00:14:15.060 ] 00:14:15.060 }' 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.060 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.060 [2024-12-15 18:45:15.498231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.319 [2024-12-15 18:45:15.559422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.319 [2024-12-15 18:45:15.559480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.319 [2024-12-15 18:45:15.559499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.319 [2024-12-15 18:45:15.559512] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.319 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.319 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:15.319 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
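The `"progress"` object in the rebuild JSON above pairs a raw block count with a percentage; the percentage is simply the rebuilt blocks over the raid size in blocks, truncated to an integer. A sketch with the values from this log (19200 blocks rebuilt of the 196608-block volume):

```shell
# rebuild progress values taken from the JSON dump in the log
blocks_rebuilt=19200
raid_size_blocks=196608

# integer (floor) percentage, matching the "percent": 9 field above
percent=$(( blocks_rebuilt * 100 / raid_size_blocks ))
echo "$percent"
# -> 9
```

So at `"blocks": 19200` the rebuild is 19200/196608 ≈ 9.8% done, reported as `"percent": 9`.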
raid_bdev_name=raid_bdev1 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.320 "name": "raid_bdev1", 00:14:15.320 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:15.320 "strip_size_kb": 64, 00:14:15.320 "state": "online", 00:14:15.320 "raid_level": "raid5f", 00:14:15.320 "superblock": false, 00:14:15.320 "num_base_bdevs": 4, 00:14:15.320 "num_base_bdevs_discovered": 3, 00:14:15.320 "num_base_bdevs_operational": 3, 00:14:15.320 "base_bdevs_list": [ 00:14:15.320 { 00:14:15.320 "name": null, 00:14:15.320 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.320 "is_configured": false, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev2", 00:14:15.320 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev3", 00:14:15.320 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 }, 00:14:15.320 { 00:14:15.320 "name": "BaseBdev4", 00:14:15.320 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:15.320 "is_configured": true, 00:14:15.320 "data_offset": 0, 00:14:15.320 "data_size": 65536 00:14:15.320 } 00:14:15.320 ] 00:14:15.320 }' 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.320 18:45:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.580 18:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.580 18:45:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.839 18:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.839 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.839 "name": "raid_bdev1", 00:14:15.839 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:15.839 "strip_size_kb": 64, 00:14:15.839 "state": "online", 00:14:15.839 "raid_level": "raid5f", 00:14:15.839 "superblock": false, 00:14:15.839 "num_base_bdevs": 4, 00:14:15.839 "num_base_bdevs_discovered": 3, 00:14:15.839 "num_base_bdevs_operational": 3, 00:14:15.839 "base_bdevs_list": [ 00:14:15.839 { 00:14:15.839 "name": null, 00:14:15.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.839 "is_configured": false, 00:14:15.839 "data_offset": 0, 00:14:15.839 "data_size": 65536 00:14:15.839 }, 00:14:15.839 { 00:14:15.839 "name": "BaseBdev2", 00:14:15.839 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:15.839 "is_configured": true, 00:14:15.839 "data_offset": 0, 00:14:15.839 "data_size": 65536 00:14:15.839 }, 00:14:15.839 { 00:14:15.839 "name": "BaseBdev3", 00:14:15.839 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:15.839 "is_configured": true, 00:14:15.839 "data_offset": 0, 00:14:15.839 "data_size": 65536 00:14:15.839 }, 00:14:15.839 { 00:14:15.839 "name": "BaseBdev4", 00:14:15.839 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:15.839 "is_configured": true, 00:14:15.839 "data_offset": 0, 00:14:15.839 "data_size": 65536 00:14:15.839 } 00:14:15.839 ] 00:14:15.840 }' 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.840 [2024-12-15 18:45:16.144461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.840 [2024-12-15 18:45:16.148582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.840 18:45:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.840 [2024-12-15 18:45:16.150825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.778 18:45:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.778 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.778 "name": "raid_bdev1", 00:14:16.778 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:16.778 "strip_size_kb": 64, 00:14:16.778 "state": "online", 00:14:16.778 "raid_level": "raid5f", 00:14:16.778 "superblock": false, 00:14:16.778 "num_base_bdevs": 4, 00:14:16.778 "num_base_bdevs_discovered": 4, 00:14:16.778 "num_base_bdevs_operational": 4, 00:14:16.778 "process": { 00:14:16.778 "type": "rebuild", 00:14:16.778 "target": "spare", 00:14:16.778 "progress": { 00:14:16.778 "blocks": 19200, 00:14:16.778 "percent": 9 00:14:16.778 } 00:14:16.778 }, 00:14:16.778 "base_bdevs_list": [ 00:14:16.778 { 00:14:16.778 "name": "spare", 00:14:16.779 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:16.779 "is_configured": true, 00:14:16.779 "data_offset": 0, 00:14:16.779 "data_size": 65536 00:14:16.779 }, 00:14:16.779 { 00:14:16.779 "name": "BaseBdev2", 00:14:16.779 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:16.779 "is_configured": true, 00:14:16.779 "data_offset": 0, 00:14:16.779 "data_size": 65536 00:14:16.779 }, 00:14:16.779 { 00:14:16.779 "name": "BaseBdev3", 00:14:16.779 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:16.779 "is_configured": true, 00:14:16.779 "data_offset": 0, 00:14:16.779 "data_size": 65536 00:14:16.779 }, 00:14:16.779 { 00:14:16.779 "name": "BaseBdev4", 00:14:16.779 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:16.779 "is_configured": true, 00:14:16.779 "data_offset": 0, 00:14:16.779 "data_size": 65536 00:14:16.779 } 00:14:16.779 ] 00:14:16.779 }' 00:14:16.779 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=514 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.038 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.038 "name": "raid_bdev1", 00:14:17.038 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 
00:14:17.038 "strip_size_kb": 64, 00:14:17.038 "state": "online", 00:14:17.038 "raid_level": "raid5f", 00:14:17.039 "superblock": false, 00:14:17.039 "num_base_bdevs": 4, 00:14:17.039 "num_base_bdevs_discovered": 4, 00:14:17.039 "num_base_bdevs_operational": 4, 00:14:17.039 "process": { 00:14:17.039 "type": "rebuild", 00:14:17.039 "target": "spare", 00:14:17.039 "progress": { 00:14:17.039 "blocks": 21120, 00:14:17.039 "percent": 10 00:14:17.039 } 00:14:17.039 }, 00:14:17.039 "base_bdevs_list": [ 00:14:17.039 { 00:14:17.039 "name": "spare", 00:14:17.039 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:17.039 "is_configured": true, 00:14:17.039 "data_offset": 0, 00:14:17.039 "data_size": 65536 00:14:17.039 }, 00:14:17.039 { 00:14:17.039 "name": "BaseBdev2", 00:14:17.039 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:17.039 "is_configured": true, 00:14:17.039 "data_offset": 0, 00:14:17.039 "data_size": 65536 00:14:17.039 }, 00:14:17.039 { 00:14:17.039 "name": "BaseBdev3", 00:14:17.039 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:17.039 "is_configured": true, 00:14:17.039 "data_offset": 0, 00:14:17.039 "data_size": 65536 00:14:17.039 }, 00:14:17.039 { 00:14:17.039 "name": "BaseBdev4", 00:14:17.039 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:17.039 "is_configured": true, 00:14:17.039 "data_offset": 0, 00:14:17.039 "data_size": 65536 00:14:17.039 } 00:14:17.039 ] 00:14:17.039 }' 00:14:17.039 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.039 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.039 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.039 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.039 18:45:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.420 18:45:18 
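The poll cycle that repeats above — `rpc_cmd bdev_raid_get_bdevs all`, a `jq` select on `raid_bdev1`, then the `[[ rebuild == ... ]]` and `[[ spare == ... ]]` gates before `sleep 1` — can be sketched as a self-contained loop. This is a minimal stand-in, not the test script itself: `mock_rpc` and the `sed`-based field extraction replace the real RPC daemon and the script's `jq` filters, while the `timeout=514` bound and the bounded-loop shape mirror `bdev_raid.sh@706`/`@707`.

```shell
# Hypothetical stand-in for 'rpc_cmd bdev_raid_get_bdevs all' filtered to one
# bdev; emits a fragment shaped like the JSON dumps in the log above.
mock_rpc() {
  cat <<'EOF'
{ "name": "raid_bdev1", "process": { "type": "rebuild", "target": "spare",
  "progress": { "blocks": 42240, "percent": 21 } } }
EOF
}

timeout=514   # same bound the script sets via 'local timeout=514'
elapsed=0     # counter in place of bash's SECONDS in '(( SECONDS < timeout ))'
while [ "$elapsed" -lt "$timeout" ]; do
  info=$(mock_rpc)
  # sed stands in for the script's jq filters '.process.type // "none"' and
  # '.process.target // "none"'.
  ptype=$(printf '%s\n' "$info" | sed -n 's/.*"type": "\([a-z]*\)".*/\1/p')
  target=$(printf '%s\n' "$info" | sed -n 's/.*"target": "\([a-z]*\)".*/\1/p')
  # The verify step fails the test the moment either field stops matching.
  [ "$ptype" = rebuild ] || exit 1
  [ "$target" = spare ] || exit 1
  elapsed=$((elapsed + 1))
  break       # the real script instead does 'sleep 1' and re-polls
done
echo "$ptype/$target"
```

Each iteration in the log corresponds to one pass through such a loop, which is why the same JSON structure reappears with only `progress.blocks`/`progress.percent` advancing.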
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.420 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.420 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.420 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.420 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.420 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.421 "name": "raid_bdev1", 00:14:18.421 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:18.421 "strip_size_kb": 64, 00:14:18.421 "state": "online", 00:14:18.421 "raid_level": "raid5f", 00:14:18.421 "superblock": false, 00:14:18.421 "num_base_bdevs": 4, 00:14:18.421 "num_base_bdevs_discovered": 4, 00:14:18.421 "num_base_bdevs_operational": 4, 00:14:18.421 "process": { 00:14:18.421 "type": "rebuild", 00:14:18.421 "target": "spare", 00:14:18.421 "progress": { 00:14:18.421 "blocks": 42240, 00:14:18.421 "percent": 21 00:14:18.421 } 00:14:18.421 }, 00:14:18.421 "base_bdevs_list": [ 00:14:18.421 { 00:14:18.421 "name": "spare", 00:14:18.421 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 
00:14:18.421 "is_configured": true, 00:14:18.421 "data_offset": 0, 00:14:18.421 "data_size": 65536 00:14:18.421 }, 00:14:18.421 { 00:14:18.421 "name": "BaseBdev2", 00:14:18.421 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:18.421 "is_configured": true, 00:14:18.421 "data_offset": 0, 00:14:18.421 "data_size": 65536 00:14:18.421 }, 00:14:18.421 { 00:14:18.421 "name": "BaseBdev3", 00:14:18.421 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:18.421 "is_configured": true, 00:14:18.421 "data_offset": 0, 00:14:18.421 "data_size": 65536 00:14:18.421 }, 00:14:18.421 { 00:14:18.421 "name": "BaseBdev4", 00:14:18.421 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:18.421 "is_configured": true, 00:14:18.421 "data_offset": 0, 00:14:18.421 "data_size": 65536 00:14:18.421 } 00:14:18.421 ] 00:14:18.421 }' 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.421 18:45:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.359 "name": "raid_bdev1", 00:14:19.359 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:19.359 "strip_size_kb": 64, 00:14:19.359 "state": "online", 00:14:19.359 "raid_level": "raid5f", 00:14:19.359 "superblock": false, 00:14:19.359 "num_base_bdevs": 4, 00:14:19.359 "num_base_bdevs_discovered": 4, 00:14:19.359 "num_base_bdevs_operational": 4, 00:14:19.359 "process": { 00:14:19.359 "type": "rebuild", 00:14:19.359 "target": "spare", 00:14:19.359 "progress": { 00:14:19.359 "blocks": 65280, 00:14:19.359 "percent": 33 00:14:19.359 } 00:14:19.359 }, 00:14:19.359 "base_bdevs_list": [ 00:14:19.359 { 00:14:19.359 "name": "spare", 00:14:19.359 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:19.359 "is_configured": true, 00:14:19.359 "data_offset": 0, 00:14:19.359 "data_size": 65536 00:14:19.359 }, 00:14:19.359 { 00:14:19.359 "name": "BaseBdev2", 00:14:19.359 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:19.359 "is_configured": true, 00:14:19.359 "data_offset": 0, 00:14:19.359 "data_size": 65536 00:14:19.359 }, 00:14:19.359 { 00:14:19.359 "name": "BaseBdev3", 00:14:19.359 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:19.359 "is_configured": true, 00:14:19.359 "data_offset": 0, 00:14:19.359 "data_size": 65536 00:14:19.359 }, 00:14:19.359 { 00:14:19.359 "name": 
"BaseBdev4", 00:14:19.359 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:19.359 "is_configured": true, 00:14:19.359 "data_offset": 0, 00:14:19.359 "data_size": 65536 00:14:19.359 } 00:14:19.359 ] 00:14:19.359 }' 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.359 18:45:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.297 18:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.557 18:45:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.557 "name": "raid_bdev1", 00:14:20.557 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:20.557 "strip_size_kb": 64, 00:14:20.557 "state": "online", 00:14:20.557 "raid_level": "raid5f", 00:14:20.557 "superblock": false, 00:14:20.557 "num_base_bdevs": 4, 00:14:20.557 "num_base_bdevs_discovered": 4, 00:14:20.557 "num_base_bdevs_operational": 4, 00:14:20.557 "process": { 00:14:20.557 "type": "rebuild", 00:14:20.557 "target": "spare", 00:14:20.557 "progress": { 00:14:20.557 "blocks": 86400, 00:14:20.557 "percent": 43 00:14:20.557 } 00:14:20.557 }, 00:14:20.557 "base_bdevs_list": [ 00:14:20.557 { 00:14:20.557 "name": "spare", 00:14:20.557 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:20.557 "is_configured": true, 00:14:20.557 "data_offset": 0, 00:14:20.557 "data_size": 65536 00:14:20.557 }, 00:14:20.557 { 00:14:20.557 "name": "BaseBdev2", 00:14:20.557 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:20.557 "is_configured": true, 00:14:20.557 "data_offset": 0, 00:14:20.557 "data_size": 65536 00:14:20.557 }, 00:14:20.557 { 00:14:20.557 "name": "BaseBdev3", 00:14:20.557 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:20.557 "is_configured": true, 00:14:20.557 "data_offset": 0, 00:14:20.557 "data_size": 65536 00:14:20.557 }, 00:14:20.557 { 00:14:20.557 "name": "BaseBdev4", 00:14:20.557 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:20.557 "is_configured": true, 00:14:20.557 "data_offset": 0, 00:14:20.557 "data_size": 65536 00:14:20.557 } 00:14:20.557 ] 00:14:20.557 }' 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.557 18:45:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.496 "name": "raid_bdev1", 00:14:21.496 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:21.496 "strip_size_kb": 64, 00:14:21.496 "state": "online", 00:14:21.496 "raid_level": "raid5f", 00:14:21.496 "superblock": false, 00:14:21.496 "num_base_bdevs": 4, 00:14:21.496 "num_base_bdevs_discovered": 4, 00:14:21.496 "num_base_bdevs_operational": 4, 00:14:21.496 "process": { 00:14:21.496 "type": "rebuild", 00:14:21.496 "target": "spare", 00:14:21.496 "progress": { 00:14:21.496 "blocks": 107520, 00:14:21.496 "percent": 54 00:14:21.496 } 
00:14:21.496 }, 00:14:21.496 "base_bdevs_list": [ 00:14:21.496 { 00:14:21.496 "name": "spare", 00:14:21.496 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:21.496 "is_configured": true, 00:14:21.496 "data_offset": 0, 00:14:21.496 "data_size": 65536 00:14:21.496 }, 00:14:21.496 { 00:14:21.496 "name": "BaseBdev2", 00:14:21.496 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:21.496 "is_configured": true, 00:14:21.496 "data_offset": 0, 00:14:21.496 "data_size": 65536 00:14:21.496 }, 00:14:21.496 { 00:14:21.496 "name": "BaseBdev3", 00:14:21.496 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:21.496 "is_configured": true, 00:14:21.496 "data_offset": 0, 00:14:21.496 "data_size": 65536 00:14:21.496 }, 00:14:21.496 { 00:14:21.496 "name": "BaseBdev4", 00:14:21.496 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:21.496 "is_configured": true, 00:14:21.496 "data_offset": 0, 00:14:21.496 "data_size": 65536 00:14:21.496 } 00:14:21.496 ] 00:14:21.496 }' 00:14:21.496 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.756 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.756 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.756 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.756 18:45:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.695 
18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.695 18:45:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.695 "name": "raid_bdev1", 00:14:22.695 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:22.695 "strip_size_kb": 64, 00:14:22.695 "state": "online", 00:14:22.695 "raid_level": "raid5f", 00:14:22.695 "superblock": false, 00:14:22.695 "num_base_bdevs": 4, 00:14:22.695 "num_base_bdevs_discovered": 4, 00:14:22.695 "num_base_bdevs_operational": 4, 00:14:22.695 "process": { 00:14:22.695 "type": "rebuild", 00:14:22.695 "target": "spare", 00:14:22.695 "progress": { 00:14:22.695 "blocks": 130560, 00:14:22.695 "percent": 66 00:14:22.695 } 00:14:22.695 }, 00:14:22.695 "base_bdevs_list": [ 00:14:22.695 { 00:14:22.695 "name": "spare", 00:14:22.695 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 0, 00:14:22.695 "data_size": 65536 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev2", 00:14:22.695 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 0, 00:14:22.695 "data_size": 65536 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev3", 00:14:22.695 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 
00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 0, 00:14:22.695 "data_size": 65536 00:14:22.695 }, 00:14:22.695 { 00:14:22.695 "name": "BaseBdev4", 00:14:22.695 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:22.695 "is_configured": true, 00:14:22.695 "data_offset": 0, 00:14:22.695 "data_size": 65536 00:14:22.695 } 00:14:22.695 ] 00:14:22.695 }' 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.695 18:45:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.077 "name": "raid_bdev1", 00:14:24.077 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:24.077 "strip_size_kb": 64, 00:14:24.077 "state": "online", 00:14:24.077 "raid_level": "raid5f", 00:14:24.077 "superblock": false, 00:14:24.077 "num_base_bdevs": 4, 00:14:24.077 "num_base_bdevs_discovered": 4, 00:14:24.077 "num_base_bdevs_operational": 4, 00:14:24.077 "process": { 00:14:24.077 "type": "rebuild", 00:14:24.077 "target": "spare", 00:14:24.077 "progress": { 00:14:24.077 "blocks": 151680, 00:14:24.077 "percent": 77 00:14:24.077 } 00:14:24.077 }, 00:14:24.077 "base_bdevs_list": [ 00:14:24.077 { 00:14:24.077 "name": "spare", 00:14:24.077 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:24.077 "is_configured": true, 00:14:24.077 "data_offset": 0, 00:14:24.077 "data_size": 65536 00:14:24.077 }, 00:14:24.077 { 00:14:24.077 "name": "BaseBdev2", 00:14:24.077 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:24.077 "is_configured": true, 00:14:24.077 "data_offset": 0, 00:14:24.077 "data_size": 65536 00:14:24.077 }, 00:14:24.077 { 00:14:24.077 "name": "BaseBdev3", 00:14:24.077 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:24.077 "is_configured": true, 00:14:24.077 "data_offset": 0, 00:14:24.077 "data_size": 65536 00:14:24.077 }, 00:14:24.077 { 00:14:24.077 "name": "BaseBdev4", 00:14:24.077 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:24.077 "is_configured": true, 00:14:24.077 "data_offset": 0, 00:14:24.077 "data_size": 65536 00:14:24.077 } 00:14:24.077 ] 00:14:24.077 }' 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.077 18:45:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.016 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.016 "name": "raid_bdev1", 00:14:25.016 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:25.016 "strip_size_kb": 64, 00:14:25.016 "state": "online", 00:14:25.016 "raid_level": "raid5f", 00:14:25.016 "superblock": false, 00:14:25.016 "num_base_bdevs": 4, 00:14:25.016 "num_base_bdevs_discovered": 4, 00:14:25.016 "num_base_bdevs_operational": 4, 00:14:25.016 
"process": { 00:14:25.016 "type": "rebuild", 00:14:25.016 "target": "spare", 00:14:25.016 "progress": { 00:14:25.016 "blocks": 172800, 00:14:25.016 "percent": 87 00:14:25.016 } 00:14:25.016 }, 00:14:25.016 "base_bdevs_list": [ 00:14:25.016 { 00:14:25.016 "name": "spare", 00:14:25.016 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:25.016 "is_configured": true, 00:14:25.016 "data_offset": 0, 00:14:25.016 "data_size": 65536 00:14:25.016 }, 00:14:25.016 { 00:14:25.016 "name": "BaseBdev2", 00:14:25.016 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:25.016 "is_configured": true, 00:14:25.016 "data_offset": 0, 00:14:25.016 "data_size": 65536 00:14:25.016 }, 00:14:25.016 { 00:14:25.016 "name": "BaseBdev3", 00:14:25.016 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:25.017 "is_configured": true, 00:14:25.017 "data_offset": 0, 00:14:25.017 "data_size": 65536 00:14:25.017 }, 00:14:25.017 { 00:14:25.017 "name": "BaseBdev4", 00:14:25.017 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:25.017 "is_configured": true, 00:14:25.017 "data_offset": 0, 00:14:25.017 "data_size": 65536 00:14:25.017 } 00:14:25.017 ] 00:14:25.017 }' 00:14:25.017 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.017 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.017 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.017 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.017 18:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.968 18:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.237 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.237 "name": "raid_bdev1", 00:14:26.237 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:26.237 "strip_size_kb": 64, 00:14:26.237 "state": "online", 00:14:26.237 "raid_level": "raid5f", 00:14:26.237 "superblock": false, 00:14:26.237 "num_base_bdevs": 4, 00:14:26.237 "num_base_bdevs_discovered": 4, 00:14:26.237 "num_base_bdevs_operational": 4, 00:14:26.237 "process": { 00:14:26.237 "type": "rebuild", 00:14:26.237 "target": "spare", 00:14:26.237 "progress": { 00:14:26.237 "blocks": 193920, 00:14:26.237 "percent": 98 00:14:26.237 } 00:14:26.237 }, 00:14:26.237 "base_bdevs_list": [ 00:14:26.237 { 00:14:26.237 "name": "spare", 00:14:26.237 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:26.237 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev2", 00:14:26.238 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:26.238 "is_configured": true, 00:14:26.238 
"data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev3", 00:14:26.238 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:26.238 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev4", 00:14:26.238 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:26.238 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 } 00:14:26.238 ] 00:14:26.238 }' 00:14:26.238 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.238 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.238 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.238 [2024-12-15 18:45:26.494899] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:26.238 [2024-12-15 18:45:26.495037] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.238 [2024-12-15 18:45:26.495099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.238 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.238 18:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.177 "name": "raid_bdev1", 00:14:27.177 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:27.177 "strip_size_kb": 64, 00:14:27.177 "state": "online", 00:14:27.177 "raid_level": "raid5f", 00:14:27.177 "superblock": false, 00:14:27.177 "num_base_bdevs": 4, 00:14:27.177 "num_base_bdevs_discovered": 4, 00:14:27.177 "num_base_bdevs_operational": 4, 00:14:27.177 "base_bdevs_list": [ 00:14:27.177 { 00:14:27.177 "name": "spare", 00:14:27.177 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:27.177 "is_configured": true, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 }, 00:14:27.177 { 00:14:27.177 "name": "BaseBdev2", 00:14:27.177 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:27.177 "is_configured": true, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 }, 00:14:27.177 { 00:14:27.177 "name": "BaseBdev3", 00:14:27.177 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:27.177 "is_configured": true, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 }, 00:14:27.177 { 00:14:27.177 "name": "BaseBdev4", 00:14:27.177 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:27.177 "is_configured": 
true, 00:14:27.177 "data_offset": 0, 00:14:27.177 "data_size": 65536 00:14:27.177 } 00:14:27.177 ] 00:14:27.177 }' 00:14:27.177 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.178 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.437 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.438 "name": "raid_bdev1", 00:14:27.438 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:27.438 "strip_size_kb": 64, 00:14:27.438 "state": 
"online", 00:14:27.438 "raid_level": "raid5f", 00:14:27.438 "superblock": false, 00:14:27.438 "num_base_bdevs": 4, 00:14:27.438 "num_base_bdevs_discovered": 4, 00:14:27.438 "num_base_bdevs_operational": 4, 00:14:27.438 "base_bdevs_list": [ 00:14:27.438 { 00:14:27.438 "name": "spare", 00:14:27.438 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 "name": "BaseBdev2", 00:14:27.438 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 "name": "BaseBdev3", 00:14:27.438 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 "name": "BaseBdev4", 00:14:27.438 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 } 00:14:27.438 ] 00:14:27.438 }' 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.438 18:45:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.438 "name": "raid_bdev1", 00:14:27.438 "uuid": "cb1ad01b-3711-44b4-98dd-3e010ecc2a95", 00:14:27.438 "strip_size_kb": 64, 00:14:27.438 "state": "online", 00:14:27.438 "raid_level": "raid5f", 00:14:27.438 "superblock": false, 00:14:27.438 "num_base_bdevs": 4, 00:14:27.438 "num_base_bdevs_discovered": 4, 00:14:27.438 "num_base_bdevs_operational": 4, 00:14:27.438 "base_bdevs_list": [ 00:14:27.438 { 00:14:27.438 "name": "spare", 00:14:27.438 "uuid": "3bce95a0-1cbc-585e-8f3a-18479abc3874", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 
"name": "BaseBdev2", 00:14:27.438 "uuid": "3d7f2d81-4fa3-5239-9cd8-1d0fadfddac3", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 "name": "BaseBdev3", 00:14:27.438 "uuid": "a2415c66-3ff5-5c01-bfa9-4ec07af6c517", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 }, 00:14:27.438 { 00:14:27.438 "name": "BaseBdev4", 00:14:27.438 "uuid": "8e5b308a-d2c1-5951-9940-d862d28d91ab", 00:14:27.438 "is_configured": true, 00:14:27.438 "data_offset": 0, 00:14:27.438 "data_size": 65536 00:14:27.438 } 00:14:27.438 ] 00:14:27.438 }' 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.438 18:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.007 [2024-12-15 18:45:28.261604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.007 [2024-12-15 18:45:28.261640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.007 [2024-12-15 18:45:28.261745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.007 [2024-12-15 18:45:28.261851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.007 [2024-12-15 18:45:28.261863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.007 18:45:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.007 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:28.267 /dev/nbd0 00:14:28.267 18:45:28 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.267 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.267 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:28.267 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.268 1+0 records in 00:14:28.268 1+0 records out 00:14:28.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316836 s, 12.9 MB/s 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.268 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:28.528 /dev/nbd1 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.528 1+0 records in 00:14:28.528 1+0 records out 00:14:28.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449704 s, 9.1 MB/s 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.528 18:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.788 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96930 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 96930 ']' 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 96930 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96930 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96930' 00:14:29.048 killing process with pid 96930 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 96930 00:14:29.048 Received shutdown signal, test time was about 60.000000 seconds 00:14:29.048 00:14:29.048 Latency(us) 00:14:29.048 [2024-12-15T18:45:29.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.048 [2024-12-15T18:45:29.489Z] =================================================================================================================== 00:14:29.048 [2024-12-15T18:45:29.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:29.048 [2024-12-15 18:45:29.320862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.048 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 96930 00:14:29.048 [2024-12-15 18:45:29.372386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:29.309 00:14:29.309 real 0m18.190s 00:14:29.309 user 0m22.045s 00:14:29.309 sys 0m2.164s 00:14:29.309 ************************************ 00:14:29.309 END TEST raid5f_rebuild_test 00:14:29.309 ************************************ 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 18:45:29 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:14:29.309 18:45:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.309 18:45:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.309 18:45:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 ************************************ 00:14:29.309 START TEST raid5f_rebuild_test_sb 00:14:29.309 ************************************ 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:29.309 18:45:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97436 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97436 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 97436 ']' 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.309 18:45:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.309 Zero copy mechanism will not be used. 00:14:29.309 [2024-12-15 18:45:29.747132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:14:29.309 [2024-12-15 18:45:29.747253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97436 ] 00:14:29.569 [2024-12-15 18:45:29.915079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.569 [2024-12-15 18:45:29.940633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.569 [2024-12-15 18:45:29.983197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.569 [2024-12-15 18:45:29.983235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.139 BaseBdev1_malloc 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.139 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.139 [2024-12-15 18:45:30.574549] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.139 [2024-12-15 18:45:30.574613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.139 [2024-12-15 18:45:30.574658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:30.139 [2024-12-15 18:45:30.574677] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.139 [2024-12-15 18:45:30.576922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.139 [2024-12-15 18:45:30.577010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.399 BaseBdev1 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.399 BaseBdev2_malloc 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.399 [2024-12-15 18:45:30.599137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:30.399 [2024-12-15 18:45:30.599186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:30.399 [2024-12-15 18:45:30.599220] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.399 [2024-12-15 18:45:30.599228] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.399 [2024-12-15 18:45:30.601315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.399 [2024-12-15 18:45:30.601353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.399 BaseBdev2 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.399 BaseBdev3_malloc 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.399 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.399 [2024-12-15 18:45:30.627539] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:30.399 [2024-12-15 18:45:30.627588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.399 [2024-12-15 18:45:30.627612] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:30.399 [2024-12-15 
18:45:30.627620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.399 [2024-12-15 18:45:30.629695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.399 [2024-12-15 18:45:30.629729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:30.399 BaseBdev3 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 BaseBdev4_malloc 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 [2024-12-15 18:45:30.673926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:30.400 [2024-12-15 18:45:30.674008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.400 [2024-12-15 18:45:30.674049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.400 [2024-12-15 18:45:30.674064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.400 [2024-12-15 18:45:30.677435] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:30.400 [2024-12-15 18:45:30.677554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:30.400 BaseBdev4 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 spare_malloc 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 spare_delay 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 [2024-12-15 18:45:30.715073] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.400 [2024-12-15 18:45:30.715117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.400 [2024-12-15 18:45:30.715151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:14:30.400 [2024-12-15 18:45:30.715159] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.400 [2024-12-15 18:45:30.717208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.400 [2024-12-15 18:45:30.717244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.400 spare 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 [2024-12-15 18:45:30.727117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.400 [2024-12-15 18:45:30.728989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.400 [2024-12-15 18:45:30.729052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.400 [2024-12-15 18:45:30.729091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:30.400 [2024-12-15 18:45:30.729256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:30.400 [2024-12-15 18:45:30.729267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:30.400 [2024-12-15 18:45:30.729486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:30.400 [2024-12-15 18:45:30.729922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:30.400 [2024-12-15 18:45:30.729936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:14:30.400 [2024-12-15 18:45:30.730066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.400 18:45:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.400 "name": "raid_bdev1", 00:14:30.400 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:30.400 "strip_size_kb": 64, 00:14:30.400 "state": "online", 00:14:30.400 "raid_level": "raid5f", 00:14:30.400 "superblock": true, 00:14:30.400 "num_base_bdevs": 4, 00:14:30.400 "num_base_bdevs_discovered": 4, 00:14:30.400 "num_base_bdevs_operational": 4, 00:14:30.400 "base_bdevs_list": [ 00:14:30.400 { 00:14:30.400 "name": "BaseBdev1", 00:14:30.400 "uuid": "816932c9-edaf-5a63-8b13-067aa17fea82", 00:14:30.400 "is_configured": true, 00:14:30.400 "data_offset": 2048, 00:14:30.400 "data_size": 63488 00:14:30.400 }, 00:14:30.400 { 00:14:30.400 "name": "BaseBdev2", 00:14:30.400 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:30.400 "is_configured": true, 00:14:30.400 "data_offset": 2048, 00:14:30.400 "data_size": 63488 00:14:30.400 }, 00:14:30.400 { 00:14:30.400 "name": "BaseBdev3", 00:14:30.400 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:30.400 "is_configured": true, 00:14:30.400 "data_offset": 2048, 00:14:30.400 "data_size": 63488 00:14:30.400 }, 00:14:30.400 { 00:14:30.400 "name": "BaseBdev4", 00:14:30.400 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:30.400 "is_configured": true, 00:14:30.400 "data_offset": 2048, 00:14:30.400 "data_size": 63488 00:14:30.400 } 00:14:30.400 ] 00:14:30.400 }' 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.400 18:45:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.970 18:45:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.970 [2024-12-15 18:45:31.167223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.970 18:45:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.970 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:31.230 [2024-12-15 18:45:31.438655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:31.230 /dev/nbd0 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.230 1+0 records in 00:14:31.230 
1+0 records out 00:14:31.230 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287985 s, 14.2 MB/s 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:31.230 18:45:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:31.800 496+0 records in 00:14:31.800 496+0 records out 00:14:31.800 97517568 bytes (98 MB, 93 MiB) copied, 0.638169 s, 153 MB/s 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.800 18:45:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.800 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.060 [2024-12-15 18:45:32.407480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.060 [2024-12-15 18:45:32.423541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:32.060 18:45:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.060 "name": "raid_bdev1", 00:14:32.060 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:32.060 "strip_size_kb": 64, 00:14:32.060 "state": "online", 00:14:32.060 "raid_level": "raid5f", 00:14:32.060 "superblock": true, 00:14:32.060 "num_base_bdevs": 4, 00:14:32.060 "num_base_bdevs_discovered": 3, 00:14:32.060 "num_base_bdevs_operational": 3, 00:14:32.060 
"base_bdevs_list": [ 00:14:32.060 { 00:14:32.060 "name": null, 00:14:32.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.060 "is_configured": false, 00:14:32.060 "data_offset": 0, 00:14:32.060 "data_size": 63488 00:14:32.060 }, 00:14:32.060 { 00:14:32.060 "name": "BaseBdev2", 00:14:32.060 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:32.060 "is_configured": true, 00:14:32.060 "data_offset": 2048, 00:14:32.060 "data_size": 63488 00:14:32.060 }, 00:14:32.060 { 00:14:32.060 "name": "BaseBdev3", 00:14:32.060 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:32.060 "is_configured": true, 00:14:32.060 "data_offset": 2048, 00:14:32.060 "data_size": 63488 00:14:32.060 }, 00:14:32.060 { 00:14:32.060 "name": "BaseBdev4", 00:14:32.060 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:32.060 "is_configured": true, 00:14:32.060 "data_offset": 2048, 00:14:32.060 "data_size": 63488 00:14:32.060 } 00:14:32.060 ] 00:14:32.060 }' 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.060 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.631 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.631 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.631 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.631 [2024-12-15 18:45:32.890790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.631 [2024-12-15 18:45:32.895211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:14:32.631 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.631 18:45:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:32.631 [2024-12-15 18:45:32.897595] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.570 "name": "raid_bdev1", 00:14:33.570 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:33.570 "strip_size_kb": 64, 00:14:33.570 "state": "online", 00:14:33.570 "raid_level": "raid5f", 00:14:33.570 "superblock": true, 00:14:33.570 "num_base_bdevs": 4, 00:14:33.570 "num_base_bdevs_discovered": 4, 00:14:33.570 "num_base_bdevs_operational": 4, 00:14:33.570 "process": { 00:14:33.570 "type": "rebuild", 00:14:33.570 "target": "spare", 00:14:33.570 "progress": { 00:14:33.570 "blocks": 19200, 00:14:33.570 "percent": 10 00:14:33.570 } 00:14:33.570 }, 00:14:33.570 "base_bdevs_list": [ 00:14:33.570 { 00:14:33.570 "name": "spare", 00:14:33.570 "uuid": 
"29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:33.570 "is_configured": true, 00:14:33.570 "data_offset": 2048, 00:14:33.570 "data_size": 63488 00:14:33.570 }, 00:14:33.570 { 00:14:33.570 "name": "BaseBdev2", 00:14:33.570 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:33.570 "is_configured": true, 00:14:33.570 "data_offset": 2048, 00:14:33.570 "data_size": 63488 00:14:33.570 }, 00:14:33.570 { 00:14:33.570 "name": "BaseBdev3", 00:14:33.570 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:33.570 "is_configured": true, 00:14:33.570 "data_offset": 2048, 00:14:33.570 "data_size": 63488 00:14:33.570 }, 00:14:33.570 { 00:14:33.570 "name": "BaseBdev4", 00:14:33.570 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:33.570 "is_configured": true, 00:14:33.570 "data_offset": 2048, 00:14:33.570 "data_size": 63488 00:14:33.570 } 00:14:33.570 ] 00:14:33.570 }' 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.570 18:45:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.830 [2024-12-15 18:45:34.037826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.830 [2024-12-15 18:45:34.104649] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.830 [2024-12-15 18:45:34.104800] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.830 [2024-12-15 18:45:34.104862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.830 [2024-12-15 18:45:34.104886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.830 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.830 "name": "raid_bdev1", 00:14:33.831 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:33.831 "strip_size_kb": 64, 00:14:33.831 "state": "online", 00:14:33.831 "raid_level": "raid5f", 00:14:33.831 "superblock": true, 00:14:33.831 "num_base_bdevs": 4, 00:14:33.831 "num_base_bdevs_discovered": 3, 00:14:33.831 "num_base_bdevs_operational": 3, 00:14:33.831 "base_bdevs_list": [ 00:14:33.831 { 00:14:33.831 "name": null, 00:14:33.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.831 "is_configured": false, 00:14:33.831 "data_offset": 0, 00:14:33.831 "data_size": 63488 00:14:33.831 }, 00:14:33.831 { 00:14:33.831 "name": "BaseBdev2", 00:14:33.831 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:33.831 "is_configured": true, 00:14:33.831 "data_offset": 2048, 00:14:33.831 "data_size": 63488 00:14:33.831 }, 00:14:33.831 { 00:14:33.831 "name": "BaseBdev3", 00:14:33.831 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:33.831 "is_configured": true, 00:14:33.831 "data_offset": 2048, 00:14:33.831 "data_size": 63488 00:14:33.831 }, 00:14:33.831 { 00:14:33.831 "name": "BaseBdev4", 00:14:33.831 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:33.831 "is_configured": true, 00:14:33.831 "data_offset": 2048, 00:14:33.831 "data_size": 63488 00:14:33.831 } 00:14:33.831 ] 00:14:33.831 }' 00:14:33.831 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.831 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.400 
18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.400 "name": "raid_bdev1", 00:14:34.400 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:34.400 "strip_size_kb": 64, 00:14:34.400 "state": "online", 00:14:34.400 "raid_level": "raid5f", 00:14:34.400 "superblock": true, 00:14:34.400 "num_base_bdevs": 4, 00:14:34.400 "num_base_bdevs_discovered": 3, 00:14:34.400 "num_base_bdevs_operational": 3, 00:14:34.400 "base_bdevs_list": [ 00:14:34.400 { 00:14:34.400 "name": null, 00:14:34.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.400 "is_configured": false, 00:14:34.400 "data_offset": 0, 00:14:34.400 "data_size": 63488 00:14:34.400 }, 00:14:34.400 { 00:14:34.400 "name": "BaseBdev2", 00:14:34.400 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:34.400 "is_configured": true, 00:14:34.400 "data_offset": 2048, 00:14:34.400 "data_size": 63488 00:14:34.400 }, 00:14:34.400 { 00:14:34.400 "name": "BaseBdev3", 00:14:34.400 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:34.400 "is_configured": true, 00:14:34.400 "data_offset": 2048, 00:14:34.400 
"data_size": 63488 00:14:34.400 }, 00:14:34.400 { 00:14:34.400 "name": "BaseBdev4", 00:14:34.400 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:34.400 "is_configured": true, 00:14:34.400 "data_offset": 2048, 00:14:34.400 "data_size": 63488 00:14:34.400 } 00:14:34.400 ] 00:14:34.400 }' 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.400 [2024-12-15 18:45:34.689655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.400 [2024-12-15 18:45:34.693865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.400 18:45:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:34.400 [2024-12-15 18:45:34.696073] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.338 "name": "raid_bdev1", 00:14:35.338 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:35.338 "strip_size_kb": 64, 00:14:35.338 "state": "online", 00:14:35.338 "raid_level": "raid5f", 00:14:35.338 "superblock": true, 00:14:35.338 "num_base_bdevs": 4, 00:14:35.338 "num_base_bdevs_discovered": 4, 00:14:35.338 "num_base_bdevs_operational": 4, 00:14:35.338 "process": { 00:14:35.338 "type": "rebuild", 00:14:35.338 "target": "spare", 00:14:35.338 "progress": { 00:14:35.338 "blocks": 19200, 00:14:35.338 "percent": 10 00:14:35.338 } 00:14:35.338 }, 00:14:35.338 "base_bdevs_list": [ 00:14:35.338 { 00:14:35.338 "name": "spare", 00:14:35.338 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:35.338 "is_configured": true, 00:14:35.338 "data_offset": 2048, 00:14:35.338 "data_size": 63488 00:14:35.338 }, 00:14:35.338 { 00:14:35.338 "name": "BaseBdev2", 00:14:35.338 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:35.338 "is_configured": true, 00:14:35.338 "data_offset": 2048, 00:14:35.338 "data_size": 63488 00:14:35.338 }, 00:14:35.338 { 
00:14:35.338 "name": "BaseBdev3", 00:14:35.338 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:35.338 "is_configured": true, 00:14:35.338 "data_offset": 2048, 00:14:35.338 "data_size": 63488 00:14:35.338 }, 00:14:35.338 { 00:14:35.338 "name": "BaseBdev4", 00:14:35.338 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:35.338 "is_configured": true, 00:14:35.338 "data_offset": 2048, 00:14:35.338 "data_size": 63488 00:14:35.338 } 00:14:35.338 ] 00:14:35.338 }' 00:14:35.338 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:35.598 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=532 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- 
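The `bdev_raid.sh: line 666: [: =: unary operator expected` error recorded above is the classic single-bracket failure mode: an unquoted variable that expands to nothing, leaving `[ = false ]`, where `[` sees `=` in the position of a unary operator. A minimal reproduction with a hypothetical `flag` variable (the actual variable name at bdev_raid.sh line 666 is not shown in this log), plus the quoting that avoids it:

```shell
# Hypothetical reproduction of the logged failure: with flag empty and
# unquoted, `[ $flag = false ]` expands to `[ = false ]` and errors out
# with "[: =: unary operator expected" (exit status 2) instead of testing.
flag=""

# Broken form, left commented so this sketch runs cleanly:
#   [ $flag = false ]

# Quoted form: the empty expansion stays a single (empty) operand, so the
# binary "=" comparison is well-formed and simply evaluates to false.
if [ "$flag" = false ]; then
  result=false-branch
else
  result=else-branch
fi
echo "$result"
```

In bash, `[[ $flag = false ]]` would also avoid the error, since `[[ ]]` does not word-split unquoted expansions; note the test run above proceeds anyway because the failed `[` only skips that conditional branch.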
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.598 "name": "raid_bdev1", 00:14:35.598 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:35.598 "strip_size_kb": 64, 00:14:35.598 "state": "online", 00:14:35.598 "raid_level": "raid5f", 00:14:35.598 "superblock": true, 00:14:35.598 "num_base_bdevs": 4, 00:14:35.598 "num_base_bdevs_discovered": 4, 00:14:35.598 "num_base_bdevs_operational": 4, 00:14:35.598 "process": { 00:14:35.598 "type": "rebuild", 00:14:35.598 "target": "spare", 00:14:35.598 "progress": { 00:14:35.598 "blocks": 21120, 00:14:35.598 "percent": 11 00:14:35.598 } 00:14:35.598 }, 00:14:35.598 "base_bdevs_list": [ 00:14:35.598 { 00:14:35.598 "name": "spare", 00:14:35.598 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:35.598 "is_configured": true, 00:14:35.598 "data_offset": 2048, 00:14:35.598 "data_size": 63488 00:14:35.598 }, 00:14:35.598 { 00:14:35.598 "name": "BaseBdev2", 00:14:35.598 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:35.598 "is_configured": true, 00:14:35.598 "data_offset": 2048, 00:14:35.598 "data_size": 63488 00:14:35.598 }, 00:14:35.598 { 
00:14:35.598 "name": "BaseBdev3", 00:14:35.598 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:35.598 "is_configured": true, 00:14:35.598 "data_offset": 2048, 00:14:35.598 "data_size": 63488 00:14:35.598 }, 00:14:35.598 { 00:14:35.598 "name": "BaseBdev4", 00:14:35.598 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:35.598 "is_configured": true, 00:14:35.598 "data_offset": 2048, 00:14:35.598 "data_size": 63488 00:14:35.598 } 00:14:35.598 ] 00:14:35.598 }' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.598 18:45:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.980 18:45:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.980 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.980 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.980 "name": "raid_bdev1", 00:14:36.980 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:36.980 "strip_size_kb": 64, 00:14:36.980 "state": "online", 00:14:36.980 "raid_level": "raid5f", 00:14:36.980 "superblock": true, 00:14:36.980 "num_base_bdevs": 4, 00:14:36.980 "num_base_bdevs_discovered": 4, 00:14:36.980 "num_base_bdevs_operational": 4, 00:14:36.980 "process": { 00:14:36.980 "type": "rebuild", 00:14:36.980 "target": "spare", 00:14:36.980 "progress": { 00:14:36.980 "blocks": 42240, 00:14:36.980 "percent": 22 00:14:36.980 } 00:14:36.980 }, 00:14:36.980 "base_bdevs_list": [ 00:14:36.980 { 00:14:36.980 "name": "spare", 00:14:36.980 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:36.980 "is_configured": true, 00:14:36.980 "data_offset": 2048, 00:14:36.980 "data_size": 63488 00:14:36.980 }, 00:14:36.980 { 00:14:36.980 "name": "BaseBdev2", 00:14:36.980 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:36.980 "is_configured": true, 00:14:36.980 "data_offset": 2048, 00:14:36.980 "data_size": 63488 00:14:36.980 }, 00:14:36.980 { 00:14:36.980 "name": "BaseBdev3", 00:14:36.980 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:36.980 "is_configured": true, 00:14:36.980 "data_offset": 2048, 00:14:36.980 "data_size": 63488 00:14:36.980 }, 00:14:36.980 { 00:14:36.980 "name": "BaseBdev4", 00:14:36.980 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:36.980 "is_configured": true, 00:14:36.980 "data_offset": 2048, 00:14:36.980 "data_size": 63488 00:14:36.980 } 00:14:36.980 ] 00:14:36.980 }' 00:14:36.981 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:36.981 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.981 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.981 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.981 18:45:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.947 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.947 "name": "raid_bdev1", 00:14:37.947 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:37.947 "strip_size_kb": 64, 00:14:37.947 "state": "online", 00:14:37.947 
"raid_level": "raid5f", 00:14:37.947 "superblock": true, 00:14:37.947 "num_base_bdevs": 4, 00:14:37.947 "num_base_bdevs_discovered": 4, 00:14:37.947 "num_base_bdevs_operational": 4, 00:14:37.947 "process": { 00:14:37.947 "type": "rebuild", 00:14:37.947 "target": "spare", 00:14:37.947 "progress": { 00:14:37.947 "blocks": 65280, 00:14:37.947 "percent": 34 00:14:37.947 } 00:14:37.947 }, 00:14:37.947 "base_bdevs_list": [ 00:14:37.947 { 00:14:37.947 "name": "spare", 00:14:37.947 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:37.947 "is_configured": true, 00:14:37.947 "data_offset": 2048, 00:14:37.947 "data_size": 63488 00:14:37.947 }, 00:14:37.947 { 00:14:37.948 "name": "BaseBdev2", 00:14:37.948 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:37.948 "is_configured": true, 00:14:37.948 "data_offset": 2048, 00:14:37.948 "data_size": 63488 00:14:37.948 }, 00:14:37.948 { 00:14:37.948 "name": "BaseBdev3", 00:14:37.948 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:37.948 "is_configured": true, 00:14:37.948 "data_offset": 2048, 00:14:37.948 "data_size": 63488 00:14:37.948 }, 00:14:37.948 { 00:14:37.948 "name": "BaseBdev4", 00:14:37.948 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:37.948 "is_configured": true, 00:14:37.948 "data_offset": 2048, 00:14:37.948 "data_size": 63488 00:14:37.948 } 00:14:37.948 ] 00:14:37.948 }' 00:14:37.948 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.948 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.948 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.948 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.948 18:45:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.887 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.146 "name": "raid_bdev1", 00:14:39.146 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:39.146 "strip_size_kb": 64, 00:14:39.146 "state": "online", 00:14:39.146 "raid_level": "raid5f", 00:14:39.146 "superblock": true, 00:14:39.146 "num_base_bdevs": 4, 00:14:39.146 "num_base_bdevs_discovered": 4, 00:14:39.146 "num_base_bdevs_operational": 4, 00:14:39.146 "process": { 00:14:39.146 "type": "rebuild", 00:14:39.146 "target": "spare", 00:14:39.146 "progress": { 00:14:39.146 "blocks": 86400, 00:14:39.146 "percent": 45 00:14:39.146 } 00:14:39.146 }, 00:14:39.146 "base_bdevs_list": [ 00:14:39.146 { 00:14:39.146 "name": "spare", 00:14:39.146 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:39.146 "is_configured": true, 
00:14:39.146 "data_offset": 2048, 00:14:39.146 "data_size": 63488 00:14:39.146 }, 00:14:39.146 { 00:14:39.146 "name": "BaseBdev2", 00:14:39.146 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:39.146 "is_configured": true, 00:14:39.146 "data_offset": 2048, 00:14:39.146 "data_size": 63488 00:14:39.146 }, 00:14:39.146 { 00:14:39.146 "name": "BaseBdev3", 00:14:39.146 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:39.146 "is_configured": true, 00:14:39.146 "data_offset": 2048, 00:14:39.146 "data_size": 63488 00:14:39.146 }, 00:14:39.146 { 00:14:39.146 "name": "BaseBdev4", 00:14:39.146 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:39.146 "is_configured": true, 00:14:39.146 "data_offset": 2048, 00:14:39.146 "data_size": 63488 00:14:39.146 } 00:14:39.146 ] 00:14:39.146 }' 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.146 18:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.085 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.085 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.085 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.086 "name": "raid_bdev1", 00:14:40.086 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:40.086 "strip_size_kb": 64, 00:14:40.086 "state": "online", 00:14:40.086 "raid_level": "raid5f", 00:14:40.086 "superblock": true, 00:14:40.086 "num_base_bdevs": 4, 00:14:40.086 "num_base_bdevs_discovered": 4, 00:14:40.086 "num_base_bdevs_operational": 4, 00:14:40.086 "process": { 00:14:40.086 "type": "rebuild", 00:14:40.086 "target": "spare", 00:14:40.086 "progress": { 00:14:40.086 "blocks": 109440, 00:14:40.086 "percent": 57 00:14:40.086 } 00:14:40.086 }, 00:14:40.086 "base_bdevs_list": [ 00:14:40.086 { 00:14:40.086 "name": "spare", 00:14:40.086 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:40.086 "is_configured": true, 00:14:40.086 "data_offset": 2048, 00:14:40.086 "data_size": 63488 00:14:40.086 }, 00:14:40.086 { 00:14:40.086 "name": "BaseBdev2", 00:14:40.086 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:40.086 "is_configured": true, 00:14:40.086 "data_offset": 2048, 00:14:40.086 "data_size": 63488 00:14:40.086 }, 00:14:40.086 { 00:14:40.086 "name": "BaseBdev3", 00:14:40.086 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:40.086 "is_configured": true, 00:14:40.086 "data_offset": 2048, 00:14:40.086 "data_size": 63488 00:14:40.086 }, 00:14:40.086 
{ 00:14:40.086 "name": "BaseBdev4", 00:14:40.086 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:40.086 "is_configured": true, 00:14:40.086 "data_offset": 2048, 00:14:40.086 "data_size": 63488 00:14:40.086 } 00:14:40.086 ] 00:14:40.086 }' 00:14:40.086 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.345 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.345 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.345 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.345 18:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.284 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.285 "name": "raid_bdev1", 00:14:41.285 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:41.285 "strip_size_kb": 64, 00:14:41.285 "state": "online", 00:14:41.285 "raid_level": "raid5f", 00:14:41.285 "superblock": true, 00:14:41.285 "num_base_bdevs": 4, 00:14:41.285 "num_base_bdevs_discovered": 4, 00:14:41.285 "num_base_bdevs_operational": 4, 00:14:41.285 "process": { 00:14:41.285 "type": "rebuild", 00:14:41.285 "target": "spare", 00:14:41.285 "progress": { 00:14:41.285 "blocks": 130560, 00:14:41.285 "percent": 68 00:14:41.285 } 00:14:41.285 }, 00:14:41.285 "base_bdevs_list": [ 00:14:41.285 { 00:14:41.285 "name": "spare", 00:14:41.285 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:41.285 "is_configured": true, 00:14:41.285 "data_offset": 2048, 00:14:41.285 "data_size": 63488 00:14:41.285 }, 00:14:41.285 { 00:14:41.285 "name": "BaseBdev2", 00:14:41.285 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:41.285 "is_configured": true, 00:14:41.285 "data_offset": 2048, 00:14:41.285 "data_size": 63488 00:14:41.285 }, 00:14:41.285 { 00:14:41.285 "name": "BaseBdev3", 00:14:41.285 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:41.285 "is_configured": true, 00:14:41.285 "data_offset": 2048, 00:14:41.285 "data_size": 63488 00:14:41.285 }, 00:14:41.285 { 00:14:41.285 "name": "BaseBdev4", 00:14:41.285 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:41.285 "is_configured": true, 00:14:41.285 "data_offset": 2048, 00:14:41.285 "data_size": 63488 00:14:41.285 } 00:14:41.285 ] 00:14:41.285 }' 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.285 18:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.665 "name": "raid_bdev1", 00:14:42.665 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:42.665 "strip_size_kb": 64, 00:14:42.665 "state": "online", 00:14:42.665 "raid_level": "raid5f", 00:14:42.665 "superblock": true, 00:14:42.665 "num_base_bdevs": 4, 00:14:42.665 "num_base_bdevs_discovered": 4, 00:14:42.665 "num_base_bdevs_operational": 4, 00:14:42.665 "process": { 00:14:42.665 "type": 
"rebuild", 00:14:42.665 "target": "spare", 00:14:42.665 "progress": { 00:14:42.665 "blocks": 151680, 00:14:42.665 "percent": 79 00:14:42.665 } 00:14:42.665 }, 00:14:42.665 "base_bdevs_list": [ 00:14:42.665 { 00:14:42.665 "name": "spare", 00:14:42.665 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:42.665 "is_configured": true, 00:14:42.665 "data_offset": 2048, 00:14:42.665 "data_size": 63488 00:14:42.665 }, 00:14:42.665 { 00:14:42.665 "name": "BaseBdev2", 00:14:42.665 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:42.665 "is_configured": true, 00:14:42.665 "data_offset": 2048, 00:14:42.665 "data_size": 63488 00:14:42.665 }, 00:14:42.665 { 00:14:42.665 "name": "BaseBdev3", 00:14:42.665 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:42.665 "is_configured": true, 00:14:42.665 "data_offset": 2048, 00:14:42.665 "data_size": 63488 00:14:42.665 }, 00:14:42.665 { 00:14:42.665 "name": "BaseBdev4", 00:14:42.665 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:42.665 "is_configured": true, 00:14:42.665 "data_offset": 2048, 00:14:42.665 "data_size": 63488 00:14:42.665 } 00:14:42.665 ] 00:14:42.665 }' 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.665 18:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.603 "name": "raid_bdev1", 00:14:43.603 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:43.603 "strip_size_kb": 64, 00:14:43.603 "state": "online", 00:14:43.603 "raid_level": "raid5f", 00:14:43.603 "superblock": true, 00:14:43.603 "num_base_bdevs": 4, 00:14:43.603 "num_base_bdevs_discovered": 4, 00:14:43.603 "num_base_bdevs_operational": 4, 00:14:43.603 "process": { 00:14:43.603 "type": "rebuild", 00:14:43.603 "target": "spare", 00:14:43.603 "progress": { 00:14:43.603 "blocks": 174720, 00:14:43.603 "percent": 91 00:14:43.603 } 00:14:43.603 }, 00:14:43.603 "base_bdevs_list": [ 00:14:43.603 { 00:14:43.603 "name": "spare", 00:14:43.603 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:43.603 "is_configured": true, 00:14:43.603 "data_offset": 2048, 00:14:43.603 "data_size": 63488 00:14:43.603 }, 00:14:43.603 { 00:14:43.603 "name": "BaseBdev2", 00:14:43.603 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:43.603 
"is_configured": true, 00:14:43.603 "data_offset": 2048, 00:14:43.603 "data_size": 63488 00:14:43.603 }, 00:14:43.603 { 00:14:43.603 "name": "BaseBdev3", 00:14:43.603 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:43.603 "is_configured": true, 00:14:43.603 "data_offset": 2048, 00:14:43.603 "data_size": 63488 00:14:43.603 }, 00:14:43.603 { 00:14:43.603 "name": "BaseBdev4", 00:14:43.603 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:43.603 "is_configured": true, 00:14:43.603 "data_offset": 2048, 00:14:43.603 "data_size": 63488 00:14:43.603 } 00:14:43.603 ] 00:14:43.603 }' 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.603 18:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.541 [2024-12-15 18:45:44.741888] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:44.541 [2024-12-15 18:45:44.742061] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:44.541 [2024-12-15 18:45:44.742202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.803 18:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.803 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.803 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.803 "name": "raid_bdev1", 00:14:44.803 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:44.803 "strip_size_kb": 64, 00:14:44.803 "state": "online", 00:14:44.803 "raid_level": "raid5f", 00:14:44.803 "superblock": true, 00:14:44.803 "num_base_bdevs": 4, 00:14:44.803 "num_base_bdevs_discovered": 4, 00:14:44.803 "num_base_bdevs_operational": 4, 00:14:44.803 "base_bdevs_list": [ 00:14:44.803 { 00:14:44.804 "name": "spare", 00:14:44.804 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": "BaseBdev2", 00:14:44.804 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": "BaseBdev3", 00:14:44.804 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": 
"BaseBdev4", 00:14:44.804 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 } 00:14:44.804 ] 00:14:44.804 }' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:44.804 "name": "raid_bdev1", 00:14:44.804 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:44.804 "strip_size_kb": 64, 00:14:44.804 "state": "online", 00:14:44.804 "raid_level": "raid5f", 00:14:44.804 "superblock": true, 00:14:44.804 "num_base_bdevs": 4, 00:14:44.804 "num_base_bdevs_discovered": 4, 00:14:44.804 "num_base_bdevs_operational": 4, 00:14:44.804 "base_bdevs_list": [ 00:14:44.804 { 00:14:44.804 "name": "spare", 00:14:44.804 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": "BaseBdev2", 00:14:44.804 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": "BaseBdev3", 00:14:44.804 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 }, 00:14:44.804 { 00:14:44.804 "name": "BaseBdev4", 00:14:44.804 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:44.804 "is_configured": true, 00:14:44.804 "data_offset": 2048, 00:14:44.804 "data_size": 63488 00:14:44.804 } 00:14:44.804 ] 00:14:44.804 }' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.804 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.069 "name": "raid_bdev1", 00:14:45.069 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:45.069 "strip_size_kb": 64, 00:14:45.069 "state": "online", 00:14:45.069 "raid_level": "raid5f", 00:14:45.069 "superblock": true, 00:14:45.069 "num_base_bdevs": 4, 00:14:45.069 "num_base_bdevs_discovered": 4, 00:14:45.069 "num_base_bdevs_operational": 4, 00:14:45.069 "base_bdevs_list": [ 00:14:45.069 { 
00:14:45.069 "name": "spare", 00:14:45.069 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:45.069 "is_configured": true, 00:14:45.069 "data_offset": 2048, 00:14:45.069 "data_size": 63488 00:14:45.069 }, 00:14:45.069 { 00:14:45.069 "name": "BaseBdev2", 00:14:45.069 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:45.069 "is_configured": true, 00:14:45.069 "data_offset": 2048, 00:14:45.069 "data_size": 63488 00:14:45.069 }, 00:14:45.069 { 00:14:45.069 "name": "BaseBdev3", 00:14:45.069 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:45.069 "is_configured": true, 00:14:45.069 "data_offset": 2048, 00:14:45.069 "data_size": 63488 00:14:45.069 }, 00:14:45.069 { 00:14:45.069 "name": "BaseBdev4", 00:14:45.069 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:45.069 "is_configured": true, 00:14:45.069 "data_offset": 2048, 00:14:45.069 "data_size": 63488 00:14:45.069 } 00:14:45.069 ] 00:14:45.069 }' 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.069 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.327 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.327 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.327 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.327 [2024-12-15 18:45:45.730215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.327 [2024-12-15 18:45:45.730312] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.328 [2024-12-15 18:45:45.730430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.328 [2024-12-15 18:45:45.730537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.328 [2024-12-15 
18:45:45.730595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.587 18:45:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:45.587 /dev/nbd0 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.587 18:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.587 1+0 records in 00:14:45.587 1+0 records out 00:14:45.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360519 s, 11.4 MB/s 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.587 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:45.848 /dev/nbd1 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:45.848 1+0 records in 00:14:45.848 
1+0 records out 00:14:45.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058668 s, 7.0 MB/s 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:45.848 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.108 
18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.108 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.367 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.367 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.367 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:46.367 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.368 [2024-12-15 18:45:46.771150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:46.368 [2024-12-15 18:45:46.771208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.368 [2024-12-15 18:45:46.771228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:46.368 [2024-12-15 18:45:46.771238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.368 [2024-12-15 18:45:46.773458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.368 [2024-12-15 18:45:46.773544] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:46.368 [2024-12-15 18:45:46.773650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:46.368 [2024-12-15 18:45:46.773722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.368 [2024-12-15 18:45:46.773927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.368 [2024-12-15 18:45:46.774067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.368 [2024-12-15 18:45:46.774185] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.368 spare 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.368 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 [2024-12-15 18:45:46.874110] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:46.628 [2024-12-15 18:45:46.874178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:46.628 [2024-12-15 18:45:46.874439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:14:46.628 [2024-12-15 18:45:46.874885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:46.628 [2024-12-15 18:45:46.874900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:46.628 [2024-12-15 18:45:46.875093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.628 "name": "raid_bdev1", 00:14:46.628 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:46.628 "strip_size_kb": 64, 00:14:46.628 "state": "online", 00:14:46.628 "raid_level": "raid5f", 00:14:46.628 "superblock": true, 00:14:46.628 "num_base_bdevs": 4, 00:14:46.628 "num_base_bdevs_discovered": 4, 00:14:46.628 "num_base_bdevs_operational": 4, 00:14:46.628 "base_bdevs_list": [ 00:14:46.628 { 00:14:46.628 "name": "spare", 00:14:46.628 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:46.628 "is_configured": true, 00:14:46.628 "data_offset": 2048, 00:14:46.628 "data_size": 63488 00:14:46.628 }, 00:14:46.628 { 00:14:46.628 "name": "BaseBdev2", 00:14:46.628 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:46.628 "is_configured": true, 00:14:46.628 "data_offset": 
2048, 00:14:46.628 "data_size": 63488 00:14:46.628 }, 00:14:46.628 { 00:14:46.628 "name": "BaseBdev3", 00:14:46.628 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:46.628 "is_configured": true, 00:14:46.628 "data_offset": 2048, 00:14:46.628 "data_size": 63488 00:14:46.628 }, 00:14:46.628 { 00:14:46.628 "name": "BaseBdev4", 00:14:46.628 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:46.628 "is_configured": true, 00:14:46.628 "data_offset": 2048, 00:14:46.628 "data_size": 63488 00:14:46.628 } 00:14:46.628 ] 00:14:46.628 }' 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.628 18:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.197 "name": 
"raid_bdev1", 00:14:47.197 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:47.197 "strip_size_kb": 64, 00:14:47.197 "state": "online", 00:14:47.197 "raid_level": "raid5f", 00:14:47.197 "superblock": true, 00:14:47.197 "num_base_bdevs": 4, 00:14:47.197 "num_base_bdevs_discovered": 4, 00:14:47.197 "num_base_bdevs_operational": 4, 00:14:47.197 "base_bdevs_list": [ 00:14:47.197 { 00:14:47.197 "name": "spare", 00:14:47.197 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev2", 00:14:47.197 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev3", 00:14:47.197 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev4", 00:14:47.197 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 } 00:14:47.197 ] 00:14:47.197 }' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.197 [2024-12-15 18:45:47.549919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.197 "name": "raid_bdev1", 00:14:47.197 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:47.197 "strip_size_kb": 64, 00:14:47.197 "state": "online", 00:14:47.197 "raid_level": "raid5f", 00:14:47.197 "superblock": true, 00:14:47.197 "num_base_bdevs": 4, 00:14:47.197 "num_base_bdevs_discovered": 3, 00:14:47.197 "num_base_bdevs_operational": 3, 00:14:47.197 "base_bdevs_list": [ 00:14:47.197 { 00:14:47.197 "name": null, 00:14:47.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.197 "is_configured": false, 00:14:47.197 "data_offset": 0, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev2", 00:14:47.197 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev3", 00:14:47.197 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 2048, 00:14:47.197 "data_size": 63488 00:14:47.197 }, 00:14:47.197 { 00:14:47.197 "name": "BaseBdev4", 00:14:47.197 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:47.197 "is_configured": true, 00:14:47.197 "data_offset": 
2048, 00:14:47.197 "data_size": 63488 00:14:47.197 } 00:14:47.197 ] 00:14:47.197 }' 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.197 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.766 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.766 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.766 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.766 [2024-12-15 18:45:47.969204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.766 [2024-12-15 18:45:47.969456] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.766 [2024-12-15 18:45:47.969523] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:47.766 [2024-12-15 18:45:47.969591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.766 [2024-12-15 18:45:47.973773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:14:47.766 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.767 18:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:47.767 [2024-12-15 18:45:47.975942] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.704 18:45:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.704 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.704 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.704 "name": "raid_bdev1", 00:14:48.704 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:48.704 "strip_size_kb": 64, 00:14:48.704 "state": "online", 00:14:48.704 
"raid_level": "raid5f", 00:14:48.704 "superblock": true, 00:14:48.704 "num_base_bdevs": 4, 00:14:48.704 "num_base_bdevs_discovered": 4, 00:14:48.704 "num_base_bdevs_operational": 4, 00:14:48.704 "process": { 00:14:48.704 "type": "rebuild", 00:14:48.704 "target": "spare", 00:14:48.704 "progress": { 00:14:48.704 "blocks": 19200, 00:14:48.704 "percent": 10 00:14:48.704 } 00:14:48.704 }, 00:14:48.704 "base_bdevs_list": [ 00:14:48.704 { 00:14:48.704 "name": "spare", 00:14:48.704 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:48.704 "is_configured": true, 00:14:48.704 "data_offset": 2048, 00:14:48.704 "data_size": 63488 00:14:48.704 }, 00:14:48.704 { 00:14:48.704 "name": "BaseBdev2", 00:14:48.704 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:48.704 "is_configured": true, 00:14:48.705 "data_offset": 2048, 00:14:48.705 "data_size": 63488 00:14:48.705 }, 00:14:48.705 { 00:14:48.705 "name": "BaseBdev3", 00:14:48.705 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:48.705 "is_configured": true, 00:14:48.705 "data_offset": 2048, 00:14:48.705 "data_size": 63488 00:14:48.705 }, 00:14:48.705 { 00:14:48.705 "name": "BaseBdev4", 00:14:48.705 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:48.705 "is_configured": true, 00:14:48.705 "data_offset": 2048, 00:14:48.705 "data_size": 63488 00:14:48.705 } 00:14:48.705 ] 00:14:48.705 }' 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.705 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.705 [2024-12-15 18:45:49.136496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.972 [2024-12-15 18:45:49.182624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.972 [2024-12-15 18:45:49.182720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.972 [2024-12-15 18:45:49.182741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.972 [2024-12-15 18:45:49.182749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.972 "name": "raid_bdev1", 00:14:48.972 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:48.972 "strip_size_kb": 64, 00:14:48.972 "state": "online", 00:14:48.972 "raid_level": "raid5f", 00:14:48.972 "superblock": true, 00:14:48.972 "num_base_bdevs": 4, 00:14:48.972 "num_base_bdevs_discovered": 3, 00:14:48.972 "num_base_bdevs_operational": 3, 00:14:48.972 "base_bdevs_list": [ 00:14:48.972 { 00:14:48.972 "name": null, 00:14:48.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.972 "is_configured": false, 00:14:48.972 "data_offset": 0, 00:14:48.972 "data_size": 63488 00:14:48.972 }, 00:14:48.972 { 00:14:48.972 "name": "BaseBdev2", 00:14:48.972 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:48.972 "is_configured": true, 00:14:48.972 "data_offset": 2048, 00:14:48.972 "data_size": 63488 00:14:48.972 }, 00:14:48.972 { 00:14:48.972 "name": "BaseBdev3", 00:14:48.972 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:48.972 "is_configured": true, 00:14:48.972 "data_offset": 2048, 00:14:48.972 "data_size": 63488 00:14:48.972 }, 00:14:48.972 { 00:14:48.972 "name": "BaseBdev4", 00:14:48.972 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:48.972 "is_configured": true, 00:14:48.972 "data_offset": 2048, 00:14:48.972 "data_size": 63488 00:14:48.972 } 00:14:48.972 ] 00:14:48.972 
}' 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.972 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.242 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.242 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.242 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.242 [2024-12-15 18:45:49.651169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.242 [2024-12-15 18:45:49.651232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.242 [2024-12-15 18:45:49.651261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:49.242 [2024-12-15 18:45:49.651270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.242 [2024-12-15 18:45:49.651703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.242 [2024-12-15 18:45:49.651720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.242 [2024-12-15 18:45:49.651826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:49.242 [2024-12-15 18:45:49.651854] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:49.242 [2024-12-15 18:45:49.651869] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:49.242 [2024-12-15 18:45:49.651891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.242 [2024-12-15 18:45:49.655846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:14:49.242 spare 00:14:49.242 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.242 18:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:49.242 [2024-12-15 18:45:49.657940] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.624 "name": "raid_bdev1", 00:14:50.624 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:50.624 "strip_size_kb": 64, 00:14:50.624 "state": 
"online", 00:14:50.624 "raid_level": "raid5f", 00:14:50.624 "superblock": true, 00:14:50.624 "num_base_bdevs": 4, 00:14:50.624 "num_base_bdevs_discovered": 4, 00:14:50.624 "num_base_bdevs_operational": 4, 00:14:50.624 "process": { 00:14:50.624 "type": "rebuild", 00:14:50.624 "target": "spare", 00:14:50.624 "progress": { 00:14:50.624 "blocks": 19200, 00:14:50.624 "percent": 10 00:14:50.624 } 00:14:50.624 }, 00:14:50.624 "base_bdevs_list": [ 00:14:50.624 { 00:14:50.624 "name": "spare", 00:14:50.624 "uuid": "29441ee7-5918-55e4-abe8-1bdaed911108", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev2", 00:14:50.624 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev3", 00:14:50.624 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev4", 00:14:50.624 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 } 00:14:50.624 ] 00:14:50.624 }' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:50.624 18:45:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.624 [2024-12-15 18:45:50.809667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.624 [2024-12-15 18:45:50.863243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.624 [2024-12-15 18:45:50.863296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.624 [2024-12-15 18:45:50.863311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.624 [2024-12-15 18:45:50.863319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.624 18:45:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.624 "name": "raid_bdev1", 00:14:50.624 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:50.624 "strip_size_kb": 64, 00:14:50.624 "state": "online", 00:14:50.624 "raid_level": "raid5f", 00:14:50.624 "superblock": true, 00:14:50.624 "num_base_bdevs": 4, 00:14:50.624 "num_base_bdevs_discovered": 3, 00:14:50.624 "num_base_bdevs_operational": 3, 00:14:50.624 "base_bdevs_list": [ 00:14:50.624 { 00:14:50.624 "name": null, 00:14:50.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.624 "is_configured": false, 00:14:50.624 "data_offset": 0, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev2", 00:14:50.624 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev3", 00:14:50.624 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 "data_size": 63488 00:14:50.624 }, 00:14:50.624 { 00:14:50.624 "name": "BaseBdev4", 00:14:50.624 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:50.624 "is_configured": true, 00:14:50.624 "data_offset": 2048, 00:14:50.624 
"data_size": 63488 00:14:50.624 } 00:14:50.624 ] 00:14:50.624 }' 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.624 18:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.884 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.144 "name": "raid_bdev1", 00:14:51.144 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:51.144 "strip_size_kb": 64, 00:14:51.144 "state": "online", 00:14:51.144 "raid_level": "raid5f", 00:14:51.144 "superblock": true, 00:14:51.144 "num_base_bdevs": 4, 00:14:51.144 "num_base_bdevs_discovered": 3, 00:14:51.144 "num_base_bdevs_operational": 3, 00:14:51.144 "base_bdevs_list": [ 00:14:51.144 { 00:14:51.144 "name": null, 00:14:51.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.144 
"is_configured": false, 00:14:51.144 "data_offset": 0, 00:14:51.144 "data_size": 63488 00:14:51.144 }, 00:14:51.144 { 00:14:51.144 "name": "BaseBdev2", 00:14:51.144 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:51.144 "is_configured": true, 00:14:51.144 "data_offset": 2048, 00:14:51.144 "data_size": 63488 00:14:51.144 }, 00:14:51.144 { 00:14:51.144 "name": "BaseBdev3", 00:14:51.144 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:51.144 "is_configured": true, 00:14:51.144 "data_offset": 2048, 00:14:51.144 "data_size": 63488 00:14:51.144 }, 00:14:51.144 { 00:14:51.144 "name": "BaseBdev4", 00:14:51.144 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:51.144 "is_configured": true, 00:14:51.144 "data_offset": 2048, 00:14:51.144 "data_size": 63488 00:14:51.144 } 00:14:51.144 ] 00:14:51.144 }' 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.144 18:45:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.144 [2024-12-15 18:45:51.427532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.144 [2024-12-15 18:45:51.427636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.144 [2024-12-15 18:45:51.427671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:51.144 [2024-12-15 18:45:51.427700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.144 [2024-12-15 18:45:51.428145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.144 [2024-12-15 18:45:51.428173] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.144 [2024-12-15 18:45:51.428240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:51.144 [2024-12-15 18:45:51.428268] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:51.144 [2024-12-15 18:45:51.428285] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.144 [2024-12-15 18:45:51.428296] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:51.144 BaseBdev1 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.144 18:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.084 "name": "raid_bdev1", 00:14:52.084 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:52.084 "strip_size_kb": 64, 00:14:52.084 "state": "online", 00:14:52.084 "raid_level": "raid5f", 00:14:52.084 "superblock": true, 00:14:52.084 "num_base_bdevs": 4, 00:14:52.084 "num_base_bdevs_discovered": 3, 00:14:52.084 "num_base_bdevs_operational": 3, 00:14:52.084 "base_bdevs_list": [ 00:14:52.084 { 00:14:52.084 "name": null, 00:14:52.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.084 "is_configured": false, 00:14:52.084 
"data_offset": 0, 00:14:52.084 "data_size": 63488 00:14:52.084 }, 00:14:52.084 { 00:14:52.084 "name": "BaseBdev2", 00:14:52.084 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:52.084 "is_configured": true, 00:14:52.084 "data_offset": 2048, 00:14:52.084 "data_size": 63488 00:14:52.084 }, 00:14:52.084 { 00:14:52.084 "name": "BaseBdev3", 00:14:52.084 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:52.084 "is_configured": true, 00:14:52.084 "data_offset": 2048, 00:14:52.084 "data_size": 63488 00:14:52.084 }, 00:14:52.084 { 00:14:52.084 "name": "BaseBdev4", 00:14:52.084 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:52.084 "is_configured": true, 00:14:52.084 "data_offset": 2048, 00:14:52.084 "data_size": 63488 00:14:52.084 } 00:14:52.084 ] 00:14:52.084 }' 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.084 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.654 "name": "raid_bdev1", 00:14:52.654 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:52.654 "strip_size_kb": 64, 00:14:52.654 "state": "online", 00:14:52.654 "raid_level": "raid5f", 00:14:52.654 "superblock": true, 00:14:52.654 "num_base_bdevs": 4, 00:14:52.654 "num_base_bdevs_discovered": 3, 00:14:52.654 "num_base_bdevs_operational": 3, 00:14:52.654 "base_bdevs_list": [ 00:14:52.654 { 00:14:52.654 "name": null, 00:14:52.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.654 "is_configured": false, 00:14:52.654 "data_offset": 0, 00:14:52.654 "data_size": 63488 00:14:52.654 }, 00:14:52.654 { 00:14:52.654 "name": "BaseBdev2", 00:14:52.654 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:52.654 "is_configured": true, 00:14:52.654 "data_offset": 2048, 00:14:52.654 "data_size": 63488 00:14:52.654 }, 00:14:52.654 { 00:14:52.654 "name": "BaseBdev3", 00:14:52.654 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:52.654 "is_configured": true, 00:14:52.654 "data_offset": 2048, 00:14:52.654 "data_size": 63488 00:14:52.654 }, 00:14:52.654 { 00:14:52.654 "name": "BaseBdev4", 00:14:52.654 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:52.654 "is_configured": true, 00:14:52.654 "data_offset": 2048, 00:14:52.654 "data_size": 63488 00:14:52.654 } 00:14:52.654 ] 00:14:52.654 }' 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.654 
18:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.654 18:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.654 [2024-12-15 18:45:53.004854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.654 [2024-12-15 18:45:53.005004] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:52.654 [2024-12-15 18:45:53.005020] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.654 request: 00:14:52.654 { 00:14:52.654 "base_bdev": "BaseBdev1", 00:14:52.654 "raid_bdev": "raid_bdev1", 00:14:52.654 "method": "bdev_raid_add_base_bdev", 00:14:52.654 "req_id": 1 00:14:52.654 } 00:14:52.654 Got JSON-RPC error response 00:14:52.654 response: 00:14:52.654 { 00:14:52.654 "code": -22, 00:14:52.654 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:52.654 } 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.654 18:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.593 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.853 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.853 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.853 "name": "raid_bdev1", 00:14:53.853 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:53.853 "strip_size_kb": 64, 00:14:53.853 "state": "online", 00:14:53.853 "raid_level": "raid5f", 00:14:53.853 "superblock": true, 00:14:53.853 "num_base_bdevs": 4, 00:14:53.853 "num_base_bdevs_discovered": 3, 00:14:53.853 "num_base_bdevs_operational": 3, 00:14:53.853 "base_bdevs_list": [ 00:14:53.853 { 00:14:53.853 "name": null, 00:14:53.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.853 "is_configured": false, 00:14:53.853 "data_offset": 0, 00:14:53.853 "data_size": 63488 00:14:53.853 }, 00:14:53.853 { 00:14:53.853 "name": "BaseBdev2", 00:14:53.853 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:53.853 "is_configured": true, 00:14:53.853 "data_offset": 2048, 00:14:53.853 "data_size": 63488 00:14:53.853 }, 00:14:53.853 { 00:14:53.853 "name": "BaseBdev3", 00:14:53.853 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:53.853 "is_configured": true, 00:14:53.853 "data_offset": 2048, 00:14:53.853 "data_size": 63488 00:14:53.853 }, 00:14:53.853 { 00:14:53.853 "name": "BaseBdev4", 00:14:53.853 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:53.853 "is_configured": true, 00:14:53.853 "data_offset": 2048, 00:14:53.853 "data_size": 63488 00:14:53.853 } 00:14:53.853 ] 00:14:53.853 }' 00:14:53.853 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.853 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.114 "name": "raid_bdev1", 00:14:54.114 "uuid": "2d0da511-ddcd-40b6-8c57-17628ac40775", 00:14:54.114 "strip_size_kb": 64, 00:14:54.114 "state": "online", 00:14:54.114 "raid_level": "raid5f", 00:14:54.114 "superblock": true, 00:14:54.114 "num_base_bdevs": 4, 00:14:54.114 "num_base_bdevs_discovered": 3, 00:14:54.114 "num_base_bdevs_operational": 3, 00:14:54.114 "base_bdevs_list": [ 00:14:54.114 { 00:14:54.114 "name": null, 00:14:54.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.114 "is_configured": false, 00:14:54.114 "data_offset": 0, 00:14:54.114 "data_size": 63488 00:14:54.114 }, 00:14:54.114 { 00:14:54.114 "name": "BaseBdev2", 00:14:54.114 "uuid": "a54e122f-adcc-5f87-936c-aeee26aa6adb", 00:14:54.114 "is_configured": true, 
00:14:54.114 "data_offset": 2048, 00:14:54.114 "data_size": 63488 00:14:54.114 }, 00:14:54.114 { 00:14:54.114 "name": "BaseBdev3", 00:14:54.114 "uuid": "24778cdb-f218-565e-a3a6-f38fa64a0a79", 00:14:54.114 "is_configured": true, 00:14:54.114 "data_offset": 2048, 00:14:54.114 "data_size": 63488 00:14:54.114 }, 00:14:54.114 { 00:14:54.114 "name": "BaseBdev4", 00:14:54.114 "uuid": "7baa9661-c7fe-53ac-a89c-1e55b90ce0fc", 00:14:54.114 "is_configured": true, 00:14:54.114 "data_offset": 2048, 00:14:54.114 "data_size": 63488 00:14:54.114 } 00:14:54.114 ] 00:14:54.114 }' 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.114 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97436 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 97436 ']' 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 97436 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97436 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 97436' 00:14:54.374 killing process with pid 97436 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 97436 00:14:54.374 Received shutdown signal, test time was about 60.000000 seconds 00:14:54.374 00:14:54.374 Latency(us) 00:14:54.374 [2024-12-15T18:45:54.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.374 [2024-12-15T18:45:54.815Z] =================================================================================================================== 00:14:54.374 [2024-12-15T18:45:54.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.374 [2024-12-15 18:45:54.608917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.374 [2024-12-15 18:45:54.609029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.374 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 97436 00:14:54.374 [2024-12-15 18:45:54.609105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.374 [2024-12-15 18:45:54.609114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:54.374 [2024-12-15 18:45:54.660327] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.633 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.633 00:14:54.633 real 0m25.214s 00:14:54.633 user 0m31.785s 00:14:54.633 sys 0m3.318s 00:14:54.633 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.633 18:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.633 ************************************ 00:14:54.633 END TEST raid5f_rebuild_test_sb 00:14:54.633 ************************************ 00:14:54.633 18:45:54 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:54.633 18:45:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:54.633 18:45:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:54.633 18:45:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.633 18:45:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.633 ************************************ 00:14:54.633 START TEST raid_state_function_test_sb_4k 00:14:54.633 ************************************ 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.633 18:45:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=98230 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98230' 00:14:54.633 Process raid pid: 98230 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 98230 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98230 ']' 00:14:54.633 18:45:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.633 18:45:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.633 [2024-12-15 18:45:55.039994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:54.633 [2024-12-15 18:45:55.040143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.892 [2024-12-15 18:45:55.211882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.892 [2024-12-15 18:45:55.237539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.892 [2024-12-15 18:45:55.280838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.892 [2024-12-15 18:45:55.280872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.461 [2024-12-15 18:45:55.872226] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.461 [2024-12-15 18:45:55.872287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.461 [2024-12-15 18:45:55.872296] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.461 [2024-12-15 18:45:55.872305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.461 
18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.461 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.721 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.721 "name": "Existed_Raid", 00:14:55.721 "uuid": "4eccb44b-1e3b-41d7-a51b-76b03ff4de65", 00:14:55.721 "strip_size_kb": 0, 00:14:55.721 "state": "configuring", 00:14:55.721 "raid_level": "raid1", 00:14:55.721 "superblock": true, 00:14:55.721 "num_base_bdevs": 2, 00:14:55.721 "num_base_bdevs_discovered": 0, 00:14:55.721 "num_base_bdevs_operational": 2, 00:14:55.721 "base_bdevs_list": [ 00:14:55.721 { 00:14:55.721 "name": "BaseBdev1", 00:14:55.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.721 "is_configured": false, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 0 00:14:55.721 }, 00:14:55.721 { 00:14:55.721 "name": "BaseBdev2", 00:14:55.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.721 "is_configured": false, 00:14:55.721 "data_offset": 0, 00:14:55.721 "data_size": 0 00:14:55.721 } 00:14:55.721 ] 00:14:55.721 }' 00:14:55.721 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.721 18:45:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.981 [2024-12-15 18:45:56.291424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.981 [2024-12-15 18:45:56.291529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.981 [2024-12-15 18:45:56.303395] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.981 [2024-12-15 18:45:56.303477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.981 [2024-12-15 18:45:56.303504] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.981 [2024-12-15 18:45:56.303526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.981 18:45:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.981 [2024-12-15 18:45:56.324420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.981 BaseBdev1 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.981 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.982 [ 00:14:55.982 { 00:14:55.982 "name": "BaseBdev1", 00:14:55.982 "aliases": [ 00:14:55.982 
"8e684633-e698-4f68-ad86-e88544eb92ac" 00:14:55.982 ], 00:14:55.982 "product_name": "Malloc disk", 00:14:55.982 "block_size": 4096, 00:14:55.982 "num_blocks": 8192, 00:14:55.982 "uuid": "8e684633-e698-4f68-ad86-e88544eb92ac", 00:14:55.982 "assigned_rate_limits": { 00:14:55.982 "rw_ios_per_sec": 0, 00:14:55.982 "rw_mbytes_per_sec": 0, 00:14:55.982 "r_mbytes_per_sec": 0, 00:14:55.982 "w_mbytes_per_sec": 0 00:14:55.982 }, 00:14:55.982 "claimed": true, 00:14:55.982 "claim_type": "exclusive_write", 00:14:55.982 "zoned": false, 00:14:55.982 "supported_io_types": { 00:14:55.982 "read": true, 00:14:55.982 "write": true, 00:14:55.982 "unmap": true, 00:14:55.982 "flush": true, 00:14:55.982 "reset": true, 00:14:55.982 "nvme_admin": false, 00:14:55.982 "nvme_io": false, 00:14:55.982 "nvme_io_md": false, 00:14:55.982 "write_zeroes": true, 00:14:55.982 "zcopy": true, 00:14:55.982 "get_zone_info": false, 00:14:55.982 "zone_management": false, 00:14:55.982 "zone_append": false, 00:14:55.982 "compare": false, 00:14:55.982 "compare_and_write": false, 00:14:55.982 "abort": true, 00:14:55.982 "seek_hole": false, 00:14:55.982 "seek_data": false, 00:14:55.982 "copy": true, 00:14:55.982 "nvme_iov_md": false 00:14:55.982 }, 00:14:55.982 "memory_domains": [ 00:14:55.982 { 00:14:55.982 "dma_device_id": "system", 00:14:55.982 "dma_device_type": 1 00:14:55.982 }, 00:14:55.982 { 00:14:55.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.982 "dma_device_type": 2 00:14:55.982 } 00:14:55.982 ], 00:14:55.982 "driver_specific": {} 00:14:55.982 } 00:14:55.982 ] 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.982 "name": "Existed_Raid", 00:14:55.982 "uuid": "3af6c434-64a4-437c-95f0-bb4c65d99d83", 00:14:55.982 "strip_size_kb": 0, 00:14:55.982 "state": "configuring", 00:14:55.982 "raid_level": "raid1", 00:14:55.982 "superblock": true, 00:14:55.982 "num_base_bdevs": 2, 00:14:55.982 
"num_base_bdevs_discovered": 1, 00:14:55.982 "num_base_bdevs_operational": 2, 00:14:55.982 "base_bdevs_list": [ 00:14:55.982 { 00:14:55.982 "name": "BaseBdev1", 00:14:55.982 "uuid": "8e684633-e698-4f68-ad86-e88544eb92ac", 00:14:55.982 "is_configured": true, 00:14:55.982 "data_offset": 256, 00:14:55.982 "data_size": 7936 00:14:55.982 }, 00:14:55.982 { 00:14:55.982 "name": "BaseBdev2", 00:14:55.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.982 "is_configured": false, 00:14:55.982 "data_offset": 0, 00:14:55.982 "data_size": 0 00:14:55.982 } 00:14:55.982 ] 00:14:55.982 }' 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.982 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.552 [2024-12-15 18:45:56.767698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.552 [2024-12-15 18:45:56.767752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.552 [2024-12-15 18:45:56.779711] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.552 [2024-12-15 18:45:56.781654] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.552 [2024-12-15 18:45:56.781701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.552 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.553 "name": "Existed_Raid", 00:14:56.553 "uuid": "24611da0-d3c0-4176-ae27-b1ead172850d", 00:14:56.553 "strip_size_kb": 0, 00:14:56.553 "state": "configuring", 00:14:56.553 "raid_level": "raid1", 00:14:56.553 "superblock": true, 00:14:56.553 "num_base_bdevs": 2, 00:14:56.553 "num_base_bdevs_discovered": 1, 00:14:56.553 "num_base_bdevs_operational": 2, 00:14:56.553 "base_bdevs_list": [ 00:14:56.553 { 00:14:56.553 "name": "BaseBdev1", 00:14:56.553 "uuid": "8e684633-e698-4f68-ad86-e88544eb92ac", 00:14:56.553 "is_configured": true, 00:14:56.553 "data_offset": 256, 00:14:56.553 "data_size": 7936 00:14:56.553 }, 00:14:56.553 { 00:14:56.553 "name": "BaseBdev2", 00:14:56.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.553 "is_configured": false, 00:14:56.553 "data_offset": 0, 00:14:56.553 "data_size": 0 00:14:56.553 } 00:14:56.553 ] 00:14:56.553 }' 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.553 18:45:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.813 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:56.813 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.813 18:45:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.073 [2024-12-15 18:45:57.254082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.073 [2024-12-15 18:45:57.254368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:57.073 [2024-12-15 18:45:57.254419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:57.073 BaseBdev2 00:14:57.073 [2024-12-15 18:45:57.254705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:57.073 [2024-12-15 18:45:57.254904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:57.073 [2024-12-15 18:45:57.254954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:57.073 [2024-12-15 18:45:57.255106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.073 18:45:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.073 [ 00:14:57.073 { 00:14:57.073 "name": "BaseBdev2", 00:14:57.073 "aliases": [ 00:14:57.073 "13e7145e-abbc-4793-b7a8-dea211ee25de" 00:14:57.073 ], 00:14:57.073 "product_name": "Malloc disk", 00:14:57.073 "block_size": 4096, 00:14:57.073 "num_blocks": 8192, 00:14:57.073 "uuid": "13e7145e-abbc-4793-b7a8-dea211ee25de", 00:14:57.073 "assigned_rate_limits": { 00:14:57.073 "rw_ios_per_sec": 0, 00:14:57.073 "rw_mbytes_per_sec": 0, 00:14:57.073 "r_mbytes_per_sec": 0, 00:14:57.073 "w_mbytes_per_sec": 0 00:14:57.073 }, 00:14:57.073 "claimed": true, 00:14:57.073 "claim_type": "exclusive_write", 00:14:57.073 "zoned": false, 00:14:57.073 "supported_io_types": { 00:14:57.073 "read": true, 00:14:57.073 "write": true, 00:14:57.073 "unmap": true, 00:14:57.073 "flush": true, 00:14:57.073 "reset": true, 00:14:57.073 "nvme_admin": false, 00:14:57.073 "nvme_io": false, 00:14:57.073 "nvme_io_md": false, 00:14:57.073 "write_zeroes": true, 00:14:57.073 "zcopy": true, 00:14:57.073 "get_zone_info": false, 00:14:57.073 "zone_management": false, 00:14:57.073 "zone_append": false, 00:14:57.073 "compare": false, 00:14:57.073 "compare_and_write": false, 00:14:57.073 "abort": true, 00:14:57.073 "seek_hole": false, 00:14:57.073 "seek_data": false, 00:14:57.073 "copy": true, 00:14:57.073 "nvme_iov_md": false 
00:14:57.073 }, 00:14:57.073 "memory_domains": [ 00:14:57.073 { 00:14:57.073 "dma_device_id": "system", 00:14:57.073 "dma_device_type": 1 00:14:57.073 }, 00:14:57.073 { 00:14:57.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.073 "dma_device_type": 2 00:14:57.073 } 00:14:57.073 ], 00:14:57.073 "driver_specific": {} 00:14:57.073 } 00:14:57.073 ] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.073 "name": "Existed_Raid", 00:14:57.073 "uuid": "24611da0-d3c0-4176-ae27-b1ead172850d", 00:14:57.073 "strip_size_kb": 0, 00:14:57.073 "state": "online", 00:14:57.073 "raid_level": "raid1", 00:14:57.073 "superblock": true, 00:14:57.073 "num_base_bdevs": 2, 00:14:57.073 "num_base_bdevs_discovered": 2, 00:14:57.073 "num_base_bdevs_operational": 2, 00:14:57.073 "base_bdevs_list": [ 00:14:57.073 { 00:14:57.073 "name": "BaseBdev1", 00:14:57.073 "uuid": "8e684633-e698-4f68-ad86-e88544eb92ac", 00:14:57.073 "is_configured": true, 00:14:57.073 "data_offset": 256, 00:14:57.073 "data_size": 7936 00:14:57.073 }, 00:14:57.073 { 00:14:57.073 "name": "BaseBdev2", 00:14:57.073 "uuid": "13e7145e-abbc-4793-b7a8-dea211ee25de", 00:14:57.073 "is_configured": true, 00:14:57.073 "data_offset": 256, 00:14:57.073 "data_size": 7936 00:14:57.073 } 00:14:57.073 ] 00:14:57.073 }' 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.073 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.644 18:45:57 
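[Editor's note] For readers following the xtrace above: `verify_raid_bdev_state` fetches the raid bdev with `rpc_cmd bdev_raid_get_bdevs all`, filters it through `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the reported state and base-bdev counters against the expected values. An illustrative Python sketch of that check (not part of the test suite; the JSON fields are taken from the `raid_bdev_info` output captured above, and the helper name `check_state` is ours):

```python
import json

# Subset of the JSON reported by `bdev_raid_get_bdevs all` earlier in this log
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def check_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell comparisons done by verify_raid_bdev_state
    # (bdev/bdev_raid.sh@103-115 in the xtrace above)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# The "online" verification seen above: state online, raid1, strip size 0,
# both base bdevs operational
check_state(raid_bdev_info, "online", "raid1", 0, 2)
```

In the log, the same comparison is driven entirely from shell variables (`expected_state`, `num_base_bdevs_operational`, etc.) set by the `local` declarations visible in the trace.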
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.644 [2024-12-15 18:45:57.789564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.644 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.644 "name": "Existed_Raid", 00:14:57.644 "aliases": [ 00:14:57.644 "24611da0-d3c0-4176-ae27-b1ead172850d" 00:14:57.644 ], 00:14:57.644 "product_name": "Raid Volume", 00:14:57.644 "block_size": 4096, 00:14:57.644 "num_blocks": 7936, 00:14:57.644 "uuid": "24611da0-d3c0-4176-ae27-b1ead172850d", 00:14:57.644 "assigned_rate_limits": { 00:14:57.644 "rw_ios_per_sec": 0, 00:14:57.644 "rw_mbytes_per_sec": 0, 00:14:57.644 "r_mbytes_per_sec": 0, 00:14:57.644 "w_mbytes_per_sec": 0 00:14:57.644 }, 00:14:57.644 "claimed": false, 00:14:57.644 "zoned": false, 00:14:57.644 "supported_io_types": { 00:14:57.644 "read": true, 
00:14:57.644 "write": true, 00:14:57.644 "unmap": false, 00:14:57.644 "flush": false, 00:14:57.644 "reset": true, 00:14:57.645 "nvme_admin": false, 00:14:57.645 "nvme_io": false, 00:14:57.645 "nvme_io_md": false, 00:14:57.645 "write_zeroes": true, 00:14:57.645 "zcopy": false, 00:14:57.645 "get_zone_info": false, 00:14:57.645 "zone_management": false, 00:14:57.645 "zone_append": false, 00:14:57.645 "compare": false, 00:14:57.645 "compare_and_write": false, 00:14:57.645 "abort": false, 00:14:57.645 "seek_hole": false, 00:14:57.645 "seek_data": false, 00:14:57.645 "copy": false, 00:14:57.645 "nvme_iov_md": false 00:14:57.645 }, 00:14:57.645 "memory_domains": [ 00:14:57.645 { 00:14:57.645 "dma_device_id": "system", 00:14:57.645 "dma_device_type": 1 00:14:57.645 }, 00:14:57.645 { 00:14:57.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.645 "dma_device_type": 2 00:14:57.645 }, 00:14:57.645 { 00:14:57.645 "dma_device_id": "system", 00:14:57.645 "dma_device_type": 1 00:14:57.645 }, 00:14:57.645 { 00:14:57.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.645 "dma_device_type": 2 00:14:57.645 } 00:14:57.645 ], 00:14:57.645 "driver_specific": { 00:14:57.645 "raid": { 00:14:57.645 "uuid": "24611da0-d3c0-4176-ae27-b1ead172850d", 00:14:57.645 "strip_size_kb": 0, 00:14:57.645 "state": "online", 00:14:57.645 "raid_level": "raid1", 00:14:57.645 "superblock": true, 00:14:57.645 "num_base_bdevs": 2, 00:14:57.645 "num_base_bdevs_discovered": 2, 00:14:57.645 "num_base_bdevs_operational": 2, 00:14:57.645 "base_bdevs_list": [ 00:14:57.645 { 00:14:57.645 "name": "BaseBdev1", 00:14:57.645 "uuid": "8e684633-e698-4f68-ad86-e88544eb92ac", 00:14:57.645 "is_configured": true, 00:14:57.645 "data_offset": 256, 00:14:57.645 "data_size": 7936 00:14:57.645 }, 00:14:57.645 { 00:14:57.645 "name": "BaseBdev2", 00:14:57.645 "uuid": "13e7145e-abbc-4793-b7a8-dea211ee25de", 00:14:57.645 "is_configured": true, 00:14:57.645 "data_offset": 256, 00:14:57.645 "data_size": 7936 00:14:57.645 } 
00:14:57.645 ] 00:14:57.645 } 00:14:57.645 } 00:14:57.645 }' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.645 BaseBdev2' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.645 18:45:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 18:45:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 [2024-12-15 18:45:58.028951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:57.645 18:45:58 
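[Editor's note] The `verify_raid_bdev_properties` steps above build a fingerprint string with the jq filter `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` for the raid volume and for each configured base bdev, then compare them with a `[[ ... == ... ]]` pattern match; that is why the trace shows `cmp_raid_bdev='4096 '` and the pattern `\4\0\9\6\ \ \ ` (jq's `join` renders missing/null fields as empty strings, leaving trailing spaces). A hedged Python sketch of that fingerprint logic (the function name `fingerprint` is ours, not SPDK's):

```python
def fingerprint(bdev):
    # Mimics jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    # missing keys become null in jq, and join() renders null as an empty string,
    # so a plain 4096-byte malloc bdev yields "4096" plus three trailing spaces.
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid_volume = {"block_size": 4096}   # as reported for Existed_Raid above
base_bdev = {"block_size": 4096}     # as reported for BaseBdev1/BaseBdev2

# The test passes only when every configured base bdev matches the raid volume
assert fingerprint(raid_volume) == fingerprint(base_bdev) == "4096   "
```

This explains the otherwise puzzling escaped-space pattern in the `[[ 4096 == \4\0\9\6\ \ \ ]]` lines of the trace.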
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.645 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.905 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.905 "name": "Existed_Raid", 00:14:57.905 "uuid": "24611da0-d3c0-4176-ae27-b1ead172850d", 00:14:57.905 "strip_size_kb": 0, 00:14:57.905 "state": "online", 00:14:57.905 "raid_level": "raid1", 00:14:57.905 "superblock": true, 00:14:57.905 
"num_base_bdevs": 2, 00:14:57.905 "num_base_bdevs_discovered": 1, 00:14:57.905 "num_base_bdevs_operational": 1, 00:14:57.905 "base_bdevs_list": [ 00:14:57.905 { 00:14:57.905 "name": null, 00:14:57.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.905 "is_configured": false, 00:14:57.905 "data_offset": 0, 00:14:57.905 "data_size": 7936 00:14:57.905 }, 00:14:57.905 { 00:14:57.905 "name": "BaseBdev2", 00:14:57.905 "uuid": "13e7145e-abbc-4793-b7a8-dea211ee25de", 00:14:57.905 "is_configured": true, 00:14:57.905 "data_offset": 256, 00:14:57.905 "data_size": 7936 00:14:57.905 } 00:14:57.905 ] 00:14:57.905 }' 00:14:57.905 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.905 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.165 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.166 [2024-12-15 18:45:58.547572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.166 [2024-12-15 18:45:58.547681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.166 [2024-12-15 18:45:58.559331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.166 [2024-12-15 18:45:58.559448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.166 [2024-12-15 18:45:58.559489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.166 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:58.426 18:45:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 98230 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98230 ']' 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98230 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98230 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98230' 00:14:58.426 killing process with pid 98230 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98230 00:14:58.426 [2024-12-15 18:45:58.658729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.426 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98230 00:14:58.426 [2024-12-15 18:45:58.659768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.686 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:58.686 00:14:58.686 real 0m3.936s 00:14:58.686 user 0m6.191s 00:14:58.686 sys 0m0.862s 00:14:58.686 18:45:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:58.686 18:45:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.686 ************************************ 00:14:58.686 END TEST raid_state_function_test_sb_4k 00:14:58.686 ************************************ 00:14:58.686 18:45:58 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:58.686 18:45:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:58.686 18:45:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.686 18:45:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.686 ************************************ 00:14:58.686 START TEST raid_superblock_test_4k 00:14:58.686 ************************************ 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:58.686 
18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98472 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98472 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 98472 ']' 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.686 18:45:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.686 [2024-12-15 18:45:59.057397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:14:58.686 [2024-12-15 18:45:59.057592] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98472 ] 00:14:58.946 [2024-12-15 18:45:59.213095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.946 [2024-12-15 18:45:59.238861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.946 [2024-12-15 18:45:59.282590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.946 [2024-12-15 18:45:59.282633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.517 malloc1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.517 [2024-12-15 18:45:59.898956] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.517 [2024-12-15 18:45:59.899072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.517 [2024-12-15 18:45:59.899117] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:59.517 [2024-12-15 18:45:59.899152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.517 [2024-12-15 18:45:59.901235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.517 [2024-12-15 18:45:59.901310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.517 pt1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.517 malloc2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.517 [2024-12-15 18:45:59.931609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.517 [2024-12-15 18:45:59.931701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.517 [2024-12-15 18:45:59.931750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:59.517 [2024-12-15 18:45:59.931779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.517 [2024-12-15 18:45:59.933824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.517 [2024-12-15 
18:45:59.933892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.517 pt2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.517 [2024-12-15 18:45:59.943614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.517 [2024-12-15 18:45:59.945433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.517 [2024-12-15 18:45:59.945569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:59.517 [2024-12-15 18:45:59.945584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:59.517 [2024-12-15 18:45:59.945870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:59.517 [2024-12-15 18:45:59.946016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:59.517 [2024-12-15 18:45:59.946026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:59.517 [2024-12-15 18:45:59.946170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.517 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.777 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.777 18:45:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.777 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.777 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.777 18:45:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.777 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.777 "name": "raid_bdev1", 00:14:59.777 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:14:59.777 "strip_size_kb": 0, 00:14:59.777 "state": "online", 00:14:59.777 "raid_level": "raid1", 00:14:59.777 "superblock": true, 00:14:59.777 "num_base_bdevs": 2, 00:14:59.777 
"num_base_bdevs_discovered": 2, 00:14:59.777 "num_base_bdevs_operational": 2, 00:14:59.777 "base_bdevs_list": [ 00:14:59.777 { 00:14:59.777 "name": "pt1", 00:14:59.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.777 "is_configured": true, 00:14:59.777 "data_offset": 256, 00:14:59.777 "data_size": 7936 00:14:59.777 }, 00:14:59.777 { 00:14:59.777 "name": "pt2", 00:14:59.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.777 "is_configured": true, 00:14:59.777 "data_offset": 256, 00:14:59.777 "data_size": 7936 00:14:59.777 } 00:14:59.777 ] 00:14:59.777 }' 00:14:59.777 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.777 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.037 [2024-12-15 18:46:00.411175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.037 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.037 "name": "raid_bdev1", 00:15:00.037 "aliases": [ 00:15:00.037 "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178" 00:15:00.037 ], 00:15:00.037 "product_name": "Raid Volume", 00:15:00.037 "block_size": 4096, 00:15:00.037 "num_blocks": 7936, 00:15:00.037 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:00.037 "assigned_rate_limits": { 00:15:00.037 "rw_ios_per_sec": 0, 00:15:00.037 "rw_mbytes_per_sec": 0, 00:15:00.037 "r_mbytes_per_sec": 0, 00:15:00.037 "w_mbytes_per_sec": 0 00:15:00.037 }, 00:15:00.037 "claimed": false, 00:15:00.037 "zoned": false, 00:15:00.037 "supported_io_types": { 00:15:00.037 "read": true, 00:15:00.037 "write": true, 00:15:00.037 "unmap": false, 00:15:00.037 "flush": false, 00:15:00.037 "reset": true, 00:15:00.037 "nvme_admin": false, 00:15:00.037 "nvme_io": false, 00:15:00.037 "nvme_io_md": false, 00:15:00.037 "write_zeroes": true, 00:15:00.037 "zcopy": false, 00:15:00.037 "get_zone_info": false, 00:15:00.037 "zone_management": false, 00:15:00.037 "zone_append": false, 00:15:00.037 "compare": false, 00:15:00.037 "compare_and_write": false, 00:15:00.037 "abort": false, 00:15:00.037 "seek_hole": false, 00:15:00.037 "seek_data": false, 00:15:00.037 "copy": false, 00:15:00.037 "nvme_iov_md": false 00:15:00.037 }, 00:15:00.037 "memory_domains": [ 00:15:00.037 { 00:15:00.037 "dma_device_id": "system", 00:15:00.037 "dma_device_type": 1 00:15:00.037 }, 00:15:00.037 { 00:15:00.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.037 "dma_device_type": 2 00:15:00.037 }, 00:15:00.037 { 00:15:00.037 "dma_device_id": "system", 00:15:00.037 "dma_device_type": 1 00:15:00.037 }, 00:15:00.037 { 00:15:00.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.037 "dma_device_type": 2 00:15:00.037 } 00:15:00.037 ], 
00:15:00.037 "driver_specific": { 00:15:00.037 "raid": { 00:15:00.037 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:00.037 "strip_size_kb": 0, 00:15:00.037 "state": "online", 00:15:00.037 "raid_level": "raid1", 00:15:00.037 "superblock": true, 00:15:00.037 "num_base_bdevs": 2, 00:15:00.037 "num_base_bdevs_discovered": 2, 00:15:00.037 "num_base_bdevs_operational": 2, 00:15:00.037 "base_bdevs_list": [ 00:15:00.037 { 00:15:00.037 "name": "pt1", 00:15:00.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.037 "is_configured": true, 00:15:00.038 "data_offset": 256, 00:15:00.038 "data_size": 7936 00:15:00.038 }, 00:15:00.038 { 00:15:00.038 "name": "pt2", 00:15:00.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.038 "is_configured": true, 00:15:00.038 "data_offset": 256, 00:15:00.038 "data_size": 7936 00:15:00.038 } 00:15:00.038 ] 00:15:00.038 } 00:15:00.038 } 00:15:00.038 }' 00:15:00.038 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:00.298 pt2' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:00.298 [2024-12-15 18:46:00.630695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 ']' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.298 [2024-12-15 18:46:00.682374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.298 [2024-12-15 18:46:00.682400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.298 [2024-12-15 18:46:00.682485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.298 [2024-12-15 18:46:00.682549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.298 [2024-12-15 18:46:00.682558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:00.298 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.559 [2024-12-15 18:46:00.826187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.559 [2024-12-15 18:46:00.828001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.559 [2024-12-15 18:46:00.828117] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:00.559 [2024-12-15 18:46:00.828167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:00.559 [2024-12-15 18:46:00.828194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.559 [2024-12-15 18:46:00.828210] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:00.559 request: 00:15:00.559 { 00:15:00.559 "name": "raid_bdev1", 00:15:00.559 "raid_level": "raid1", 00:15:00.559 "base_bdevs": [ 00:15:00.559 "malloc1", 00:15:00.559 "malloc2" 00:15:00.559 ], 00:15:00.559 "superblock": false, 00:15:00.559 "method": "bdev_raid_create", 00:15:00.559 "req_id": 1 00:15:00.559 } 00:15:00.559 Got JSON-RPC error response 00:15:00.559 response: 00:15:00.559 { 00:15:00.559 "code": -17, 00:15:00.559 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.559 } 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.559 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.560 [2024-12-15 18:46:00.894009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:00.560 [2024-12-15 18:46:00.894103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.560 [2024-12-15 18:46:00.894125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:00.560 [2024-12-15 18:46:00.894133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.560 [2024-12-15 18:46:00.896184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.560 [2024-12-15 18:46:00.896220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:00.560 [2024-12-15 18:46:00.896290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:00.560 [2024-12-15 18:46:00.896332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.560 pt1 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.560 "name": "raid_bdev1", 00:15:00.560 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:00.560 "strip_size_kb": 0, 00:15:00.560 "state": "configuring", 00:15:00.560 "raid_level": "raid1", 00:15:00.560 "superblock": true, 00:15:00.560 "num_base_bdevs": 2, 00:15:00.560 "num_base_bdevs_discovered": 1, 00:15:00.560 "num_base_bdevs_operational": 2, 00:15:00.560 "base_bdevs_list": [ 00:15:00.560 { 00:15:00.560 "name": "pt1", 00:15:00.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:00.560 "is_configured": true, 00:15:00.560 "data_offset": 256, 00:15:00.560 "data_size": 7936 00:15:00.560 }, 00:15:00.560 { 00:15:00.560 "name": null, 00:15:00.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.560 "is_configured": false, 00:15:00.560 "data_offset": 256, 00:15:00.560 "data_size": 7936 00:15:00.560 } 
00:15:00.560 ] 00:15:00.560 }' 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.560 18:46:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.129 [2024-12-15 18:46:01.401201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.129 [2024-12-15 18:46:01.401339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.129 [2024-12-15 18:46:01.401382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:01.129 [2024-12-15 18:46:01.401410] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.129 [2024-12-15 18:46:01.401869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.129 [2024-12-15 18:46:01.401937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.129 [2024-12-15 18:46:01.402043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.129 [2024-12-15 18:46:01.402093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.129 [2024-12-15 18:46:01.402208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:15:01.129 [2024-12-15 18:46:01.402244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:01.129 [2024-12-15 18:46:01.402489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:01.129 [2024-12-15 18:46:01.402643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:01.129 [2024-12-15 18:46:01.402688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:01.129 [2024-12-15 18:46:01.402840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.129 pt2 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.129 "name": "raid_bdev1", 00:15:01.129 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:01.129 "strip_size_kb": 0, 00:15:01.129 "state": "online", 00:15:01.129 "raid_level": "raid1", 00:15:01.129 "superblock": true, 00:15:01.129 "num_base_bdevs": 2, 00:15:01.129 "num_base_bdevs_discovered": 2, 00:15:01.129 "num_base_bdevs_operational": 2, 00:15:01.129 "base_bdevs_list": [ 00:15:01.129 { 00:15:01.129 "name": "pt1", 00:15:01.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.129 "is_configured": true, 00:15:01.129 "data_offset": 256, 00:15:01.129 "data_size": 7936 00:15:01.129 }, 00:15:01.129 { 00:15:01.129 "name": "pt2", 00:15:01.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.129 "is_configured": true, 00:15:01.129 "data_offset": 256, 00:15:01.129 "data_size": 7936 00:15:01.129 } 00:15:01.129 ] 00:15:01.129 }' 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.129 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.699 [2024-12-15 18:46:01.856758] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.699 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.699 "name": "raid_bdev1", 00:15:01.699 "aliases": [ 00:15:01.699 "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178" 00:15:01.699 ], 00:15:01.699 "product_name": "Raid Volume", 00:15:01.699 "block_size": 4096, 00:15:01.699 "num_blocks": 7936, 00:15:01.699 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:01.699 "assigned_rate_limits": { 00:15:01.699 "rw_ios_per_sec": 0, 00:15:01.699 "rw_mbytes_per_sec": 0, 00:15:01.699 "r_mbytes_per_sec": 0, 00:15:01.699 "w_mbytes_per_sec": 0 00:15:01.699 }, 00:15:01.699 "claimed": false, 00:15:01.699 "zoned": false, 00:15:01.699 "supported_io_types": { 00:15:01.699 "read": true, 00:15:01.699 "write": true, 00:15:01.699 "unmap": false, 
00:15:01.699 "flush": false, 00:15:01.699 "reset": true, 00:15:01.699 "nvme_admin": false, 00:15:01.699 "nvme_io": false, 00:15:01.699 "nvme_io_md": false, 00:15:01.699 "write_zeroes": true, 00:15:01.699 "zcopy": false, 00:15:01.699 "get_zone_info": false, 00:15:01.699 "zone_management": false, 00:15:01.699 "zone_append": false, 00:15:01.699 "compare": false, 00:15:01.699 "compare_and_write": false, 00:15:01.699 "abort": false, 00:15:01.699 "seek_hole": false, 00:15:01.699 "seek_data": false, 00:15:01.699 "copy": false, 00:15:01.699 "nvme_iov_md": false 00:15:01.699 }, 00:15:01.699 "memory_domains": [ 00:15:01.699 { 00:15:01.699 "dma_device_id": "system", 00:15:01.699 "dma_device_type": 1 00:15:01.699 }, 00:15:01.699 { 00:15:01.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.699 "dma_device_type": 2 00:15:01.699 }, 00:15:01.699 { 00:15:01.699 "dma_device_id": "system", 00:15:01.699 "dma_device_type": 1 00:15:01.699 }, 00:15:01.699 { 00:15:01.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.699 "dma_device_type": 2 00:15:01.699 } 00:15:01.699 ], 00:15:01.699 "driver_specific": { 00:15:01.699 "raid": { 00:15:01.699 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:01.699 "strip_size_kb": 0, 00:15:01.699 "state": "online", 00:15:01.699 "raid_level": "raid1", 00:15:01.699 "superblock": true, 00:15:01.699 "num_base_bdevs": 2, 00:15:01.699 "num_base_bdevs_discovered": 2, 00:15:01.699 "num_base_bdevs_operational": 2, 00:15:01.699 "base_bdevs_list": [ 00:15:01.699 { 00:15:01.699 "name": "pt1", 00:15:01.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.699 "is_configured": true, 00:15:01.699 "data_offset": 256, 00:15:01.700 "data_size": 7936 00:15:01.700 }, 00:15:01.700 { 00:15:01.700 "name": "pt2", 00:15:01.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.700 "is_configured": true, 00:15:01.700 "data_offset": 256, 00:15:01.700 "data_size": 7936 00:15:01.700 } 00:15:01.700 ] 00:15:01.700 } 00:15:01.700 } 00:15:01.700 }' 00:15:01.700 
18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:01.700 pt2' 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.700 18:46:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.700 
18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.700 [2024-12-15 18:46:02.088317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 '!=' b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 ']' 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.700 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.700 [2024-12-15 18:46:02.132035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:01.960 
18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.960 "name": "raid_bdev1", 00:15:01.960 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 
00:15:01.960 "strip_size_kb": 0, 00:15:01.960 "state": "online", 00:15:01.960 "raid_level": "raid1", 00:15:01.960 "superblock": true, 00:15:01.960 "num_base_bdevs": 2, 00:15:01.960 "num_base_bdevs_discovered": 1, 00:15:01.960 "num_base_bdevs_operational": 1, 00:15:01.960 "base_bdevs_list": [ 00:15:01.960 { 00:15:01.960 "name": null, 00:15:01.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.960 "is_configured": false, 00:15:01.960 "data_offset": 0, 00:15:01.960 "data_size": 7936 00:15:01.960 }, 00:15:01.960 { 00:15:01.960 "name": "pt2", 00:15:01.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.960 "is_configured": true, 00:15:01.960 "data_offset": 256, 00:15:01.960 "data_size": 7936 00:15:01.960 } 00:15:01.960 ] 00:15:01.960 }' 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.960 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.220 [2024-12-15 18:46:02.611155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.220 [2024-12-15 18:46:02.611239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.220 [2024-12-15 18:46:02.611347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.220 [2024-12-15 18:46:02.611436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.220 [2024-12-15 18:46:02.611506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:02.220 18:46:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.220 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:02.481 18:46:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.481 [2024-12-15 18:46:02.683046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.481 [2024-12-15 18:46:02.683151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.481 [2024-12-15 18:46:02.683193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:02.481 [2024-12-15 18:46:02.683227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.481 [2024-12-15 18:46:02.685865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.481 [2024-12-15 18:46:02.685952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.481 [2024-12-15 18:46:02.686082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:02.481 [2024-12-15 18:46:02.686143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.481 [2024-12-15 18:46:02.686264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:02.481 [2024-12-15 18:46:02.686309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:02.481 [2024-12-15 18:46:02.686582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:02.481 [2024-12-15 18:46:02.686776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:02.481 pt2 00:15:02.481 [2024-12-15 18:46:02.686852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006d00 00:15:02.481 [2024-12-15 18:46:02.687023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.481 "name": "raid_bdev1", 00:15:02.481 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:02.481 "strip_size_kb": 0, 00:15:02.481 "state": "online", 00:15:02.481 "raid_level": "raid1", 00:15:02.481 "superblock": true, 00:15:02.481 "num_base_bdevs": 2, 00:15:02.481 "num_base_bdevs_discovered": 1, 00:15:02.481 "num_base_bdevs_operational": 1, 00:15:02.481 "base_bdevs_list": [ 00:15:02.481 { 00:15:02.481 "name": null, 00:15:02.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.481 "is_configured": false, 00:15:02.481 "data_offset": 256, 00:15:02.481 "data_size": 7936 00:15:02.481 }, 00:15:02.481 { 00:15:02.481 "name": "pt2", 00:15:02.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.481 "is_configured": true, 00:15:02.481 "data_offset": 256, 00:15:02.481 "data_size": 7936 00:15:02.481 } 00:15:02.481 ] 00:15:02.481 }' 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.481 18:46:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 [2024-12-15 18:46:03.070406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.742 [2024-12-15 18:46:03.070491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.742 [2024-12-15 18:46:03.070595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.742 [2024-12-15 18:46:03.070680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.742 [2024-12-15 18:46:03.070734] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 [2024-12-15 18:46:03.130265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.742 [2024-12-15 18:46:03.130332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.742 [2024-12-15 18:46:03.130350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:02.742 [2024-12-15 18:46:03.130368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.742 [2024-12-15 18:46:03.132961] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.742 [2024-12-15 18:46:03.133059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.742 [2024-12-15 18:46:03.133164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:02.742 [2024-12-15 18:46:03.133219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.742 [2024-12-15 18:46:03.133348] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:02.742 [2024-12-15 18:46:03.133370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.742 [2024-12-15 18:46:03.133390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:02.742 [2024-12-15 18:46:03.133426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.742 [2024-12-15 18:46:03.133508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:02.742 [2024-12-15 18:46:03.133522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:02.742 [2024-12-15 18:46:03.133755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.742 [2024-12-15 18:46:03.133924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:02.742 [2024-12-15 18:46:03.133937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:02.742 [2024-12-15 18:46:03.134069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.742 pt1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.742 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.002 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.002 "name": "raid_bdev1", 00:15:03.002 "uuid": "b95cbb4a-f7c8-4b24-ada5-3a5123bdb178", 00:15:03.002 "strip_size_kb": 0, 00:15:03.002 "state": "online", 00:15:03.002 "raid_level": "raid1", 
00:15:03.002 "superblock": true, 00:15:03.002 "num_base_bdevs": 2, 00:15:03.002 "num_base_bdevs_discovered": 1, 00:15:03.002 "num_base_bdevs_operational": 1, 00:15:03.002 "base_bdevs_list": [ 00:15:03.002 { 00:15:03.002 "name": null, 00:15:03.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.002 "is_configured": false, 00:15:03.002 "data_offset": 256, 00:15:03.002 "data_size": 7936 00:15:03.002 }, 00:15:03.002 { 00:15:03.002 "name": "pt2", 00:15:03.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.002 "is_configured": true, 00:15:03.002 "data_offset": 256, 00:15:03.002 "data_size": 7936 00:15:03.002 } 00:15:03.002 ] 00:15:03.002 }' 00:15:03.002 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.002 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.320 
[2024-12-15 18:46:03.605684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 '!=' b95cbb4a-f7c8-4b24-ada5-3a5123bdb178 ']' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98472 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 98472 ']' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 98472 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98472 00:15:03.320 killing process with pid 98472 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98472' 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 98472 00:15:03.320 [2024-12-15 18:46:03.678839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.320 [2024-12-15 18:46:03.678908] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.320 [2024-12-15 18:46:03.678953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.320 [2024-12-15 18:46:03.678963] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:03.320 18:46:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 98472 00:15:03.320 [2024-12-15 18:46:03.719842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.889 18:46:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:03.889 00:15:03.889 real 0m5.087s 00:15:03.889 user 0m8.263s 00:15:03.889 sys 0m1.071s 00:15:03.889 18:46:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.889 18:46:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.889 ************************************ 00:15:03.889 END TEST raid_superblock_test_4k 00:15:03.889 ************************************ 00:15:03.889 18:46:04 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:03.889 18:46:04 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:03.889 18:46:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:03.889 18:46:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.889 18:46:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.889 ************************************ 00:15:03.889 START TEST raid_rebuild_test_sb_4k 00:15:03.889 ************************************ 00:15:03.889 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:03.889 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:03.889 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:03.889 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:03.890 18:46:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98789 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98789 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 98789 ']' 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.890 18:46:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.890 [2024-12-15 18:46:04.229381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:03.890 [2024-12-15 18:46:04.229592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.890 Zero copy mechanism will not be used. 
00:15:03.890 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98789 ] 00:15:04.150 [2024-12-15 18:46:04.397129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.150 [2024-12-15 18:46:04.436713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.150 [2024-12-15 18:46:04.515780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.150 [2024-12-15 18:46:04.515842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 BaseBdev1_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 [2024-12-15 18:46:05.079611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.720 [2024-12-15 18:46:05.079757] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.720 [2024-12-15 18:46:05.079830] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.720 [2024-12-15 18:46:05.079926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.720 [2024-12-15 18:46:05.082495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.720 [2024-12-15 18:46:05.082584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.720 BaseBdev1 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 BaseBdev2_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 [2024-12-15 18:46:05.118923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.720 [2024-12-15 18:46:05.118985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.720 [2024-12-15 18:46:05.119010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:15:04.720 [2024-12-15 18:46:05.119021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.720 [2024-12-15 18:46:05.121579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.720 [2024-12-15 18:46:05.121667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.720 BaseBdev2 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 spare_malloc 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 spare_delay 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.720 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.720 [2024-12-15 18:46:05.158207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.720 
[2024-12-15 18:46:05.158269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.720 [2024-12-15 18:46:05.158294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:04.720 [2024-12-15 18:46:05.158303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.980 [2024-12-15 18:46:05.160854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.980 [2024-12-15 18:46:05.160891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.980 spare 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.980 [2024-12-15 18:46:05.170213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.980 [2024-12-15 18:46:05.172460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.980 [2024-12-15 18:46:05.172632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:04.980 [2024-12-15 18:46:05.172646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:04.980 [2024-12-15 18:46:05.172950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:04.980 [2024-12-15 18:46:05.173129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:04.980 [2024-12-15 18:46:05.173172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006280 00:15:04.980 [2024-12-15 18:46:05.173305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.980 "name": "raid_bdev1", 00:15:04.980 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:04.980 "strip_size_kb": 0, 00:15:04.980 "state": "online", 00:15:04.980 "raid_level": "raid1", 00:15:04.980 "superblock": true, 00:15:04.980 "num_base_bdevs": 2, 00:15:04.980 "num_base_bdevs_discovered": 2, 00:15:04.980 "num_base_bdevs_operational": 2, 00:15:04.980 "base_bdevs_list": [ 00:15:04.980 { 00:15:04.980 "name": "BaseBdev1", 00:15:04.980 "uuid": "0d50af23-725d-53e1-ac12-7a6e710a555c", 00:15:04.980 "is_configured": true, 00:15:04.980 "data_offset": 256, 00:15:04.980 "data_size": 7936 00:15:04.980 }, 00:15:04.980 { 00:15:04.980 "name": "BaseBdev2", 00:15:04.980 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:04.980 "is_configured": true, 00:15:04.980 "data_offset": 256, 00:15:04.980 "data_size": 7936 00:15:04.980 } 00:15:04.980 ] 00:15:04.980 }' 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.980 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.239 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.239 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.239 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:05.239 [2024-12-15 18:46:05.649637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.239 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.499 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.500 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:05.500 [2024-12-15 18:46:05.916973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.760 /dev/nbd0 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.760 1+0 records in 00:15:05.760 1+0 records out 00:15:05.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441144 s, 9.3 MB/s 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:05.760 18:46:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:06.329 7936+0 records in 00:15:06.329 7936+0 records out 00:15:06.329 32505856 bytes (33 MB, 31 MiB) copied, 0.578147 s, 56.2 MB/s 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.329 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:15:06.589 [2024-12-15 18:46:06.781851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.589 [2024-12-15 18:46:06.797910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.589 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.590 "name": "raid_bdev1", 00:15:06.590 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:06.590 "strip_size_kb": 0, 00:15:06.590 "state": "online", 00:15:06.590 "raid_level": "raid1", 00:15:06.590 "superblock": true, 00:15:06.590 "num_base_bdevs": 2, 00:15:06.590 "num_base_bdevs_discovered": 1, 00:15:06.590 "num_base_bdevs_operational": 1, 00:15:06.590 "base_bdevs_list": [ 00:15:06.590 { 00:15:06.590 "name": null, 00:15:06.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.590 "is_configured": false, 00:15:06.590 "data_offset": 0, 00:15:06.590 "data_size": 7936 00:15:06.590 }, 00:15:06.590 { 00:15:06.590 "name": "BaseBdev2", 00:15:06.590 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:06.590 "is_configured": true, 00:15:06.590 "data_offset": 256, 00:15:06.590 "data_size": 7936 00:15:06.590 } 00:15:06.590 ] 00:15:06.590 }' 00:15:06.590 18:46:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.590 18:46:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.850 18:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.850 18:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.850 18:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.850 [2024-12-15 18:46:07.225144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.850 [2024-12-15 18:46:07.230335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:06.850 18:46:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.850 18:46:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.850 [2024-12-15 18:46:07.232217] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.231 "name": "raid_bdev1", 00:15:08.231 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:08.231 "strip_size_kb": 0, 00:15:08.231 "state": "online", 00:15:08.231 "raid_level": "raid1", 00:15:08.231 "superblock": true, 00:15:08.231 "num_base_bdevs": 2, 00:15:08.231 "num_base_bdevs_discovered": 2, 00:15:08.231 "num_base_bdevs_operational": 2, 00:15:08.231 "process": { 00:15:08.231 "type": "rebuild", 00:15:08.231 "target": "spare", 00:15:08.231 "progress": { 00:15:08.231 "blocks": 2560, 00:15:08.231 "percent": 32 00:15:08.231 } 00:15:08.231 }, 00:15:08.231 "base_bdevs_list": [ 00:15:08.231 { 00:15:08.231 "name": "spare", 00:15:08.231 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:08.231 "is_configured": true, 00:15:08.231 "data_offset": 256, 00:15:08.231 "data_size": 7936 00:15:08.231 }, 00:15:08.231 { 00:15:08.231 "name": "BaseBdev2", 00:15:08.231 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:08.231 "is_configured": true, 00:15:08.231 "data_offset": 256, 00:15:08.231 "data_size": 7936 00:15:08.231 } 00:15:08.231 ] 00:15:08.231 }' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.231 [2024-12-15 18:46:08.396423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.231 [2024-12-15 18:46:08.436886] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.231 [2024-12-15 18:46:08.436940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.231 [2024-12-15 18:46:08.436974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.231 [2024-12-15 18:46:08.436982] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.231 "name": "raid_bdev1", 00:15:08.231 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:08.231 "strip_size_kb": 0, 00:15:08.231 "state": "online", 00:15:08.231 "raid_level": "raid1", 00:15:08.231 "superblock": true, 00:15:08.231 "num_base_bdevs": 2, 00:15:08.231 "num_base_bdevs_discovered": 1, 00:15:08.231 "num_base_bdevs_operational": 1, 00:15:08.231 "base_bdevs_list": [ 00:15:08.231 { 00:15:08.231 "name": null, 00:15:08.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.231 "is_configured": false, 00:15:08.231 "data_offset": 0, 00:15:08.231 "data_size": 7936 00:15:08.231 }, 00:15:08.231 { 00:15:08.231 "name": "BaseBdev2", 00:15:08.231 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:08.231 "is_configured": true, 00:15:08.231 "data_offset": 256, 00:15:08.231 "data_size": 7936 00:15:08.231 } 00:15:08.231 ] 00:15:08.231 }' 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.231 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.491 
18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.491 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.491 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.491 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.751 18:46:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.751 "name": "raid_bdev1", 00:15:08.751 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:08.751 "strip_size_kb": 0, 00:15:08.751 "state": "online", 00:15:08.751 "raid_level": "raid1", 00:15:08.751 "superblock": true, 00:15:08.751 "num_base_bdevs": 2, 00:15:08.751 "num_base_bdevs_discovered": 1, 00:15:08.751 "num_base_bdevs_operational": 1, 00:15:08.751 "base_bdevs_list": [ 00:15:08.751 { 00:15:08.751 "name": null, 00:15:08.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.751 "is_configured": false, 00:15:08.751 "data_offset": 0, 00:15:08.751 "data_size": 7936 00:15:08.751 }, 00:15:08.751 { 00:15:08.751 "name": "BaseBdev2", 00:15:08.751 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:08.751 "is_configured": true, 00:15:08.751 "data_offset": 256, 00:15:08.751 "data_size": 7936 00:15:08.751 } 00:15:08.751 ] 00:15:08.751 }' 00:15:08.751 18:46:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.751 [2024-12-15 18:46:09.084689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.751 [2024-12-15 18:46:09.089348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.751 18:46:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:08.751 [2024-12-15 18:46:09.091213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.690 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.949 "name": "raid_bdev1", 00:15:09.949 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:09.949 "strip_size_kb": 0, 00:15:09.949 "state": "online", 00:15:09.949 "raid_level": "raid1", 00:15:09.949 "superblock": true, 00:15:09.949 "num_base_bdevs": 2, 00:15:09.949 "num_base_bdevs_discovered": 2, 00:15:09.949 "num_base_bdevs_operational": 2, 00:15:09.949 "process": { 00:15:09.949 "type": "rebuild", 00:15:09.949 "target": "spare", 00:15:09.949 "progress": { 00:15:09.949 "blocks": 2560, 00:15:09.949 "percent": 32 00:15:09.949 } 00:15:09.949 }, 00:15:09.949 "base_bdevs_list": [ 00:15:09.949 { 00:15:09.949 "name": "spare", 00:15:09.949 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 256, 00:15:09.949 "data_size": 7936 00:15:09.949 }, 00:15:09.949 { 00:15:09.949 "name": "BaseBdev2", 00:15:09.949 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 256, 00:15:09.949 "data_size": 7936 00:15:09.949 } 00:15:09.949 ] 00:15:09.949 }' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:09.949 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=567 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.949 18:46:10 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.949 "name": "raid_bdev1", 00:15:09.949 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:09.949 "strip_size_kb": 0, 00:15:09.949 "state": "online", 00:15:09.949 "raid_level": "raid1", 00:15:09.949 "superblock": true, 00:15:09.949 "num_base_bdevs": 2, 00:15:09.949 "num_base_bdevs_discovered": 2, 00:15:09.949 "num_base_bdevs_operational": 2, 00:15:09.949 "process": { 00:15:09.949 "type": "rebuild", 00:15:09.949 "target": "spare", 00:15:09.949 "progress": { 00:15:09.949 "blocks": 2816, 00:15:09.949 "percent": 35 00:15:09.949 } 00:15:09.949 }, 00:15:09.949 "base_bdevs_list": [ 00:15:09.949 { 00:15:09.949 "name": "spare", 00:15:09.949 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 256, 00:15:09.949 "data_size": 7936 00:15:09.949 }, 00:15:09.949 { 00:15:09.949 "name": "BaseBdev2", 00:15:09.949 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:09.949 "is_configured": true, 00:15:09.949 "data_offset": 256, 00:15:09.949 "data_size": 7936 00:15:09.949 } 00:15:09.949 ] 00:15:09.949 }' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.949 18:46:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.332 "name": "raid_bdev1", 00:15:11.332 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:11.332 "strip_size_kb": 0, 00:15:11.332 "state": "online", 00:15:11.332 "raid_level": "raid1", 00:15:11.332 "superblock": true, 00:15:11.332 "num_base_bdevs": 2, 00:15:11.332 "num_base_bdevs_discovered": 2, 00:15:11.332 "num_base_bdevs_operational": 2, 00:15:11.332 "process": { 00:15:11.332 "type": "rebuild", 00:15:11.332 "target": "spare", 00:15:11.332 "progress": { 00:15:11.332 "blocks": 5632, 00:15:11.332 "percent": 70 00:15:11.332 } 00:15:11.332 }, 00:15:11.332 "base_bdevs_list": [ 00:15:11.332 { 00:15:11.332 "name": "spare", 00:15:11.332 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 
00:15:11.332 "is_configured": true, 00:15:11.332 "data_offset": 256, 00:15:11.332 "data_size": 7936 00:15:11.332 }, 00:15:11.332 { 00:15:11.332 "name": "BaseBdev2", 00:15:11.332 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:11.332 "is_configured": true, 00:15:11.332 "data_offset": 256, 00:15:11.332 "data_size": 7936 00:15:11.332 } 00:15:11.332 ] 00:15:11.332 }' 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.332 18:46:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.902 [2024-12-15 18:46:12.201618] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.902 [2024-12-15 18:46:12.201695] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.902 [2024-12-15 18:46:12.201788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.162 "name": "raid_bdev1", 00:15:12.162 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:12.162 "strip_size_kb": 0, 00:15:12.162 "state": "online", 00:15:12.162 "raid_level": "raid1", 00:15:12.162 "superblock": true, 00:15:12.162 "num_base_bdevs": 2, 00:15:12.162 "num_base_bdevs_discovered": 2, 00:15:12.162 "num_base_bdevs_operational": 2, 00:15:12.162 "base_bdevs_list": [ 00:15:12.162 { 00:15:12.162 "name": "spare", 00:15:12.162 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:12.162 "is_configured": true, 00:15:12.162 "data_offset": 256, 00:15:12.162 "data_size": 7936 00:15:12.162 }, 00:15:12.162 { 00:15:12.162 "name": "BaseBdev2", 00:15:12.162 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:12.162 "is_configured": true, 00:15:12.162 "data_offset": 256, 00:15:12.162 "data_size": 7936 00:15:12.162 } 00:15:12.162 ] 00:15:12.162 }' 00:15:12.162 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.422 "name": "raid_bdev1", 00:15:12.422 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:12.422 "strip_size_kb": 0, 00:15:12.422 "state": "online", 00:15:12.422 "raid_level": "raid1", 00:15:12.422 "superblock": true, 00:15:12.422 "num_base_bdevs": 2, 00:15:12.422 "num_base_bdevs_discovered": 2, 00:15:12.422 "num_base_bdevs_operational": 2, 00:15:12.422 "base_bdevs_list": [ 00:15:12.422 { 00:15:12.422 "name": "spare", 00:15:12.422 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:12.422 "is_configured": true, 00:15:12.422 "data_offset": 256, 00:15:12.422 "data_size": 7936 00:15:12.422 }, 00:15:12.422 { 00:15:12.422 "name": 
"BaseBdev2", 00:15:12.422 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:12.422 "is_configured": true, 00:15:12.422 "data_offset": 256, 00:15:12.422 "data_size": 7936 00:15:12.422 } 00:15:12.422 ] 00:15:12.422 }' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.422 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.682 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.682 "name": "raid_bdev1", 00:15:12.682 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:12.682 "strip_size_kb": 0, 00:15:12.682 "state": "online", 00:15:12.682 "raid_level": "raid1", 00:15:12.682 "superblock": true, 00:15:12.682 "num_base_bdevs": 2, 00:15:12.682 "num_base_bdevs_discovered": 2, 00:15:12.682 "num_base_bdevs_operational": 2, 00:15:12.682 "base_bdevs_list": [ 00:15:12.682 { 00:15:12.682 "name": "spare", 00:15:12.682 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:12.682 "is_configured": true, 00:15:12.682 "data_offset": 256, 00:15:12.682 "data_size": 7936 00:15:12.682 }, 00:15:12.682 { 00:15:12.682 "name": "BaseBdev2", 00:15:12.682 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:12.682 "is_configured": true, 00:15:12.682 "data_offset": 256, 00:15:12.682 "data_size": 7936 00:15:12.682 } 00:15:12.682 ] 00:15:12.682 }' 00:15:12.682 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.682 18:46:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.942 [2024-12-15 18:46:13.256412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:15:12.942 [2024-12-15 18:46:13.256493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.942 [2024-12-15 18:46:13.256598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.942 [2024-12-15 18:46:13.256671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.942 [2024-12-15 18:46:13.256705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.942 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:13.202 /dev/nbd0 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:15:13.202 1+0 records in 00:15:13.202 1+0 records out 00:15:13.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326994 s, 12.5 MB/s 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.202 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:13.462 /dev/nbd1 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@877 -- # break 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.462 1+0 records in 00:15:13.462 1+0 records out 00:15:13.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461881 s, 8.9 MB/s 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.462 18:46:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.722 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.982 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.982 [2024-12-15 18:46:14.296944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.982 [2024-12-15 18:46:14.297009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.982 [2024-12-15 18:46:14.297031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:13.982 [2024-12-15 18:46:14.297045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.983 [2024-12-15 18:46:14.299173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.983 [2024-12-15 18:46:14.299212] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.983 [2024-12-15 18:46:14.299290] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:13.983 [2024-12-15 18:46:14.299329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.983 [2024-12-15 18:46:14.299442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.983 spare 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.983 [2024-12-15 18:46:14.399340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:13.983 [2024-12-15 18:46:14.399365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:13.983 [2024-12-15 18:46:14.399611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:13.983 [2024-12-15 18:46:14.399748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:13.983 [2024-12-15 18:46:14.399761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:13.983 [2024-12-15 18:46:14.399902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.983 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.242 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.242 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.242 "name": "raid_bdev1", 00:15:14.242 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:14.242 "strip_size_kb": 0, 00:15:14.242 "state": "online", 00:15:14.242 "raid_level": "raid1", 00:15:14.242 "superblock": true, 00:15:14.242 "num_base_bdevs": 2, 00:15:14.242 "num_base_bdevs_discovered": 2, 00:15:14.242 "num_base_bdevs_operational": 2, 00:15:14.242 "base_bdevs_list": [ 00:15:14.242 { 00:15:14.242 "name": "spare", 00:15:14.242 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:14.242 
"is_configured": true, 00:15:14.242 "data_offset": 256, 00:15:14.242 "data_size": 7936 00:15:14.242 }, 00:15:14.242 { 00:15:14.242 "name": "BaseBdev2", 00:15:14.243 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:14.243 "is_configured": true, 00:15:14.243 "data_offset": 256, 00:15:14.243 "data_size": 7936 00:15:14.243 } 00:15:14.243 ] 00:15:14.243 }' 00:15:14.243 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.243 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.502 "name": "raid_bdev1", 00:15:14.502 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:14.502 "strip_size_kb": 0, 00:15:14.502 "state": "online", 00:15:14.502 "raid_level": "raid1", 
00:15:14.502 "superblock": true, 00:15:14.502 "num_base_bdevs": 2, 00:15:14.502 "num_base_bdevs_discovered": 2, 00:15:14.502 "num_base_bdevs_operational": 2, 00:15:14.502 "base_bdevs_list": [ 00:15:14.502 { 00:15:14.502 "name": "spare", 00:15:14.502 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:14.502 "is_configured": true, 00:15:14.502 "data_offset": 256, 00:15:14.502 "data_size": 7936 00:15:14.502 }, 00:15:14.502 { 00:15:14.502 "name": "BaseBdev2", 00:15:14.502 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:14.502 "is_configured": true, 00:15:14.502 "data_offset": 256, 00:15:14.502 "data_size": 7936 00:15:14.502 } 00:15:14.502 ] 00:15:14.502 }' 00:15:14.502 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.772 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.772 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.772 18:46:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.772 [2024-12-15 18:46:15.055956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.772 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.773 18:46:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.773 "name": "raid_bdev1", 00:15:14.773 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:14.773 "strip_size_kb": 0, 00:15:14.773 "state": "online", 00:15:14.773 "raid_level": "raid1", 00:15:14.773 "superblock": true, 00:15:14.773 "num_base_bdevs": 2, 00:15:14.773 "num_base_bdevs_discovered": 1, 00:15:14.773 "num_base_bdevs_operational": 1, 00:15:14.773 "base_bdevs_list": [ 00:15:14.773 { 00:15:14.773 "name": null, 00:15:14.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.773 "is_configured": false, 00:15:14.773 "data_offset": 0, 00:15:14.773 "data_size": 7936 00:15:14.773 }, 00:15:14.773 { 00:15:14.773 "name": "BaseBdev2", 00:15:14.773 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:14.773 "is_configured": true, 00:15:14.773 "data_offset": 256, 00:15:14.773 "data_size": 7936 00:15:14.773 } 00:15:14.773 ] 00:15:14.773 }' 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.773 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.351 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:15.351 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.351 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.351 [2024-12-15 18:46:15.547175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.351 [2024-12-15 18:46:15.547362] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:15.351 [2024-12-15 18:46:15.547421] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:15:15.351 [2024-12-15 18:46:15.547479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:15.351 [2024-12-15 18:46:15.552273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:15.351 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.351 18:46:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:15.351 [2024-12-15 18:46:15.554148] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:16.290 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.290 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.290 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.291 "name": "raid_bdev1", 00:15:16.291 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:16.291 
"strip_size_kb": 0, 00:15:16.291 "state": "online", 00:15:16.291 "raid_level": "raid1", 00:15:16.291 "superblock": true, 00:15:16.291 "num_base_bdevs": 2, 00:15:16.291 "num_base_bdevs_discovered": 2, 00:15:16.291 "num_base_bdevs_operational": 2, 00:15:16.291 "process": { 00:15:16.291 "type": "rebuild", 00:15:16.291 "target": "spare", 00:15:16.291 "progress": { 00:15:16.291 "blocks": 2560, 00:15:16.291 "percent": 32 00:15:16.291 } 00:15:16.291 }, 00:15:16.291 "base_bdevs_list": [ 00:15:16.291 { 00:15:16.291 "name": "spare", 00:15:16.291 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:16.291 "is_configured": true, 00:15:16.291 "data_offset": 256, 00:15:16.291 "data_size": 7936 00:15:16.291 }, 00:15:16.291 { 00:15:16.291 "name": "BaseBdev2", 00:15:16.291 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:16.291 "is_configured": true, 00:15:16.291 "data_offset": 256, 00:15:16.291 "data_size": 7936 00:15:16.291 } 00:15:16.291 ] 00:15:16.291 }' 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.291 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.291 [2024-12-15 18:46:16.706303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.551 [2024-12-15 18:46:16.758174] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:15:16.551 [2024-12-15 18:46:16.758263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.551 [2024-12-15 18:46:16.758281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.551 [2024-12-15 18:46:16.758288] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.551 "name": "raid_bdev1", 00:15:16.551 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:16.551 "strip_size_kb": 0, 00:15:16.551 "state": "online", 00:15:16.551 "raid_level": "raid1", 00:15:16.551 "superblock": true, 00:15:16.551 "num_base_bdevs": 2, 00:15:16.551 "num_base_bdevs_discovered": 1, 00:15:16.551 "num_base_bdevs_operational": 1, 00:15:16.551 "base_bdevs_list": [ 00:15:16.551 { 00:15:16.551 "name": null, 00:15:16.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.551 "is_configured": false, 00:15:16.551 "data_offset": 0, 00:15:16.551 "data_size": 7936 00:15:16.551 }, 00:15:16.551 { 00:15:16.551 "name": "BaseBdev2", 00:15:16.551 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:16.551 "is_configured": true, 00:15:16.551 "data_offset": 256, 00:15:16.551 "data_size": 7936 00:15:16.551 } 00:15:16.551 ] 00:15:16.551 }' 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.551 18:46:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.811 18:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.811 18:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.811 18:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.811 [2024-12-15 18:46:17.213698] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.811 [2024-12-15 18:46:17.213811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.811 [2024-12-15 
18:46:17.213855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:16.811 [2024-12-15 18:46:17.213887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.811 [2024-12-15 18:46:17.214313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.811 [2024-12-15 18:46:17.214370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.811 [2024-12-15 18:46:17.214463] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:16.811 [2024-12-15 18:46:17.214490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.811 [2024-12-15 18:46:17.214524] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:16.811 [2024-12-15 18:46:17.214555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.811 [2024-12-15 18:46:17.218562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:16.811 spare 00:15:16.811 18:46:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.811 18:46:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:16.811 [2024-12-15 18:46:17.220415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.192 "name": "raid_bdev1", 00:15:18.192 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:18.192 "strip_size_kb": 0, 00:15:18.192 "state": "online", 00:15:18.192 "raid_level": "raid1", 00:15:18.192 "superblock": true, 00:15:18.192 "num_base_bdevs": 2, 00:15:18.192 "num_base_bdevs_discovered": 2, 00:15:18.192 "num_base_bdevs_operational": 2, 00:15:18.192 "process": { 00:15:18.192 "type": "rebuild", 00:15:18.192 "target": "spare", 00:15:18.192 "progress": { 00:15:18.192 "blocks": 2560, 00:15:18.192 "percent": 32 00:15:18.192 } 00:15:18.192 }, 00:15:18.192 "base_bdevs_list": [ 00:15:18.192 { 00:15:18.192 "name": "spare", 00:15:18.192 "uuid": "56a3535e-0a44-549e-81c8-f41a8f90cefe", 00:15:18.192 "is_configured": true, 00:15:18.192 "data_offset": 256, 00:15:18.192 "data_size": 7936 00:15:18.192 }, 00:15:18.192 { 00:15:18.192 "name": "BaseBdev2", 00:15:18.192 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:18.192 "is_configured": true, 00:15:18.192 "data_offset": 256, 00:15:18.192 "data_size": 7936 00:15:18.192 } 00:15:18.192 ] 00:15:18.192 }' 00:15:18.192 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.192 18:46:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.193 [2024-12-15 18:46:18.340602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.193 [2024-12-15 18:46:18.424320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.193 [2024-12-15 18:46:18.424387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.193 [2024-12-15 18:46:18.424401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.193 [2024-12-15 18:46:18.424409] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.193 18:46:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.193 "name": "raid_bdev1", 00:15:18.193 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:18.193 "strip_size_kb": 0, 00:15:18.193 "state": "online", 00:15:18.193 "raid_level": "raid1", 00:15:18.193 "superblock": true, 00:15:18.193 "num_base_bdevs": 2, 00:15:18.193 "num_base_bdevs_discovered": 1, 00:15:18.193 "num_base_bdevs_operational": 1, 00:15:18.193 "base_bdevs_list": [ 00:15:18.193 { 00:15:18.193 "name": null, 00:15:18.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.193 "is_configured": false, 00:15:18.193 "data_offset": 0, 00:15:18.193 "data_size": 7936 00:15:18.193 }, 00:15:18.193 { 00:15:18.193 "name": "BaseBdev2", 00:15:18.193 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:18.193 "is_configured": true, 00:15:18.193 "data_offset": 256, 00:15:18.193 
"data_size": 7936 00:15:18.193 } 00:15:18.193 ] 00:15:18.193 }' 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.193 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.452 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.712 "name": "raid_bdev1", 00:15:18.712 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:18.712 "strip_size_kb": 0, 00:15:18.712 "state": "online", 00:15:18.712 "raid_level": "raid1", 00:15:18.712 "superblock": true, 00:15:18.712 "num_base_bdevs": 2, 00:15:18.712 "num_base_bdevs_discovered": 1, 00:15:18.712 "num_base_bdevs_operational": 1, 00:15:18.712 "base_bdevs_list": [ 00:15:18.712 { 00:15:18.712 "name": null, 00:15:18.712 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:18.712 "is_configured": false, 00:15:18.712 "data_offset": 0, 00:15:18.712 "data_size": 7936 00:15:18.712 }, 00:15:18.712 { 00:15:18.712 "name": "BaseBdev2", 00:15:18.712 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:18.712 "is_configured": true, 00:15:18.712 "data_offset": 256, 00:15:18.712 "data_size": 7936 00:15:18.712 } 00:15:18.712 ] 00:15:18.712 }' 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.712 18:46:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.712 [2024-12-15 18:46:19.055438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.712 [2024-12-15 18:46:19.055492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.712 [2024-12-15 18:46:19.055509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:15:18.712 [2024-12-15 18:46:19.055519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.712 [2024-12-15 18:46:19.055901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.712 [2024-12-15 18:46:19.055923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.712 [2024-12-15 18:46:19.055988] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:18.712 [2024-12-15 18:46:19.056006] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.712 [2024-12-15 18:46:19.056018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:18.712 [2024-12-15 18:46:19.056030] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:18.712 BaseBdev1 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.712 18:46:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.650 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.909 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.909 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.909 "name": "raid_bdev1", 00:15:19.909 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:19.909 "strip_size_kb": 0, 00:15:19.909 "state": "online", 00:15:19.909 "raid_level": "raid1", 00:15:19.909 "superblock": true, 00:15:19.909 "num_base_bdevs": 2, 00:15:19.909 "num_base_bdevs_discovered": 1, 00:15:19.909 "num_base_bdevs_operational": 1, 00:15:19.909 "base_bdevs_list": [ 00:15:19.909 { 00:15:19.909 "name": null, 00:15:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.910 "is_configured": false, 00:15:19.910 "data_offset": 0, 00:15:19.910 "data_size": 7936 00:15:19.910 }, 00:15:19.910 { 00:15:19.910 "name": "BaseBdev2", 00:15:19.910 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:19.910 "is_configured": true, 00:15:19.910 "data_offset": 256, 00:15:19.910 "data_size": 7936 00:15:19.910 } 00:15:19.910 ] 00:15:19.910 }' 00:15:19.910 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.910 18:46:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.169 "name": "raid_bdev1", 00:15:20.169 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:20.169 "strip_size_kb": 0, 00:15:20.169 "state": "online", 00:15:20.169 "raid_level": "raid1", 00:15:20.169 "superblock": true, 00:15:20.169 "num_base_bdevs": 2, 00:15:20.169 "num_base_bdevs_discovered": 1, 00:15:20.169 "num_base_bdevs_operational": 1, 00:15:20.169 "base_bdevs_list": [ 00:15:20.169 { 00:15:20.169 "name": null, 00:15:20.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.169 "is_configured": false, 00:15:20.169 "data_offset": 0, 00:15:20.169 "data_size": 7936 00:15:20.169 }, 00:15:20.169 { 00:15:20.169 "name": "BaseBdev2", 00:15:20.169 "uuid": 
"17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:20.169 "is_configured": true, 00:15:20.169 "data_offset": 256, 00:15:20.169 "data_size": 7936 00:15:20.169 } 00:15:20.169 ] 00:15:20.169 }' 00:15:20.169 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.429 [2024-12-15 18:46:20.696634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:20.429 [2024-12-15 18:46:20.696779] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.429 [2024-12-15 18:46:20.696793] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:20.429 request: 00:15:20.429 { 00:15:20.429 "base_bdev": "BaseBdev1", 00:15:20.429 "raid_bdev": "raid_bdev1", 00:15:20.429 "method": "bdev_raid_add_base_bdev", 00:15:20.429 "req_id": 1 00:15:20.429 } 00:15:20.429 Got JSON-RPC error response 00:15:20.429 response: 00:15:20.429 { 00:15:20.429 "code": -22, 00:15:20.429 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:20.429 } 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.429 18:46:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.368 "name": "raid_bdev1", 00:15:21.368 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:21.368 "strip_size_kb": 0, 00:15:21.368 "state": "online", 00:15:21.368 "raid_level": "raid1", 00:15:21.368 "superblock": true, 00:15:21.368 "num_base_bdevs": 2, 00:15:21.368 "num_base_bdevs_discovered": 1, 00:15:21.368 "num_base_bdevs_operational": 1, 00:15:21.368 "base_bdevs_list": [ 00:15:21.368 { 00:15:21.368 "name": null, 00:15:21.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.368 "is_configured": false, 00:15:21.368 "data_offset": 0, 00:15:21.368 "data_size": 7936 00:15:21.368 }, 00:15:21.368 { 00:15:21.368 "name": "BaseBdev2", 00:15:21.368 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:21.368 "is_configured": true, 00:15:21.368 "data_offset": 256, 00:15:21.368 "data_size": 7936 00:15:21.368 } 
00:15:21.368 ] 00:15:21.368 }' 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.368 18:46:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.937 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.937 "name": "raid_bdev1", 00:15:21.937 "uuid": "0f0700bc-3a81-49ea-ad22-a6f77524a97a", 00:15:21.937 "strip_size_kb": 0, 00:15:21.937 "state": "online", 00:15:21.937 "raid_level": "raid1", 00:15:21.937 "superblock": true, 00:15:21.937 "num_base_bdevs": 2, 00:15:21.937 "num_base_bdevs_discovered": 1, 00:15:21.937 "num_base_bdevs_operational": 1, 00:15:21.937 "base_bdevs_list": [ 00:15:21.937 { 00:15:21.937 "name": null, 00:15:21.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.937 "is_configured": false, 
00:15:21.937 "data_offset": 0, 00:15:21.937 "data_size": 7936 00:15:21.937 }, 00:15:21.937 { 00:15:21.938 "name": "BaseBdev2", 00:15:21.938 "uuid": "17ed819b-559e-5a90-8231-6d33e0a2db73", 00:15:21.938 "is_configured": true, 00:15:21.938 "data_offset": 256, 00:15:21.938 "data_size": 7936 00:15:21.938 } 00:15:21.938 ] 00:15:21.938 }' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98789 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 98789 ']' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 98789 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98789 00:15:21.938 killing process with pid 98789 00:15:21.938 Received shutdown signal, test time was about 60.000000 seconds 00:15:21.938 00:15:21.938 Latency(us) 00:15:21.938 [2024-12-15T18:46:22.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.938 [2024-12-15T18:46:22.379Z] =================================================================================================================== 00:15:21.938 [2024-12-15T18:46:22.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:21.938 
18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98789' 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 98789 00:15:21.938 [2024-12-15 18:46:22.341214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.938 [2024-12-15 18:46:22.341321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.938 [2024-12-15 18:46:22.341369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.938 [2024-12-15 18:46:22.341377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:21.938 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 98789 00:15:21.938 [2024-12-15 18:46:22.372500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.198 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.198 ************************************ 00:15:22.198 END TEST raid_rebuild_test_sb_4k 00:15:22.198 ************************************ 00:15:22.198 00:15:22.198 real 0m18.440s 00:15:22.198 user 0m24.486s 00:15:22.198 sys 0m2.699s 00:15:22.198 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.198 18:46:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.459 18:46:22 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:22.459 18:46:22 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:22.459 
18:46:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:22.459 18:46:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.459 18:46:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.459 ************************************ 00:15:22.459 START TEST raid_state_function_test_sb_md_separate 00:15:22.459 ************************************ 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:22.459 Process raid pid: 99467 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99467 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99467' 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99467 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99467 ']' 00:15:22.459 18:46:22 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.459 18:46:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.459 [2024-12-15 18:46:22.751583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:22.459 [2024-12-15 18:46:22.751706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.719 [2024-12-15 18:46:22.921937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.719 [2024-12-15 18:46:22.948066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.719 [2024-12-15 18:46:22.991210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.719 [2024-12-15 18:46:22.991242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.288 [2024-12-15 18:46:23.574512] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.288 [2024-12-15 18:46:23.574636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.288 [2024-12-15 18:46:23.574657] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.288 [2024-12-15 18:46:23.574668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.288 "name": "Existed_Raid", 00:15:23.288 "uuid": "9c1616af-21da-46d1-ba71-1cad70c888eb", 00:15:23.288 "strip_size_kb": 0, 00:15:23.288 "state": "configuring", 00:15:23.288 "raid_level": "raid1", 00:15:23.288 "superblock": true, 00:15:23.288 "num_base_bdevs": 2, 00:15:23.288 "num_base_bdevs_discovered": 0, 00:15:23.288 "num_base_bdevs_operational": 2, 00:15:23.288 "base_bdevs_list": [ 00:15:23.288 { 00:15:23.288 "name": "BaseBdev1", 00:15:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.288 "is_configured": false, 00:15:23.288 "data_offset": 0, 00:15:23.288 "data_size": 0 00:15:23.288 }, 00:15:23.288 { 00:15:23.288 "name": "BaseBdev2", 00:15:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.288 "is_configured": false, 00:15:23.288 "data_offset": 0, 00:15:23.288 "data_size": 0 00:15:23.288 } 00:15:23.288 ] 00:15:23.288 }' 00:15:23.288 18:46:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.288 18:46:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.858 [2024-12-15 18:46:24.045640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.858 [2024-12-15 18:46:24.045723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.858 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.858 [2024-12-15 18:46:24.057627] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.858 [2024-12-15 18:46:24.057708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.858 [2024-12-15 18:46:24.057734] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.858 [2024-12-15 18:46:24.057755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.858 18:46:24 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.859 [2024-12-15 18:46:24.079137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.859 BaseBdev1 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.859 [ 00:15:23.859 { 00:15:23.859 "name": "BaseBdev1", 00:15:23.859 "aliases": [ 00:15:23.859 "0b58a785-51d4-4093-a535-426e7ac9aaba" 00:15:23.859 ], 00:15:23.859 "product_name": "Malloc disk", 00:15:23.859 "block_size": 4096, 00:15:23.859 "num_blocks": 8192, 00:15:23.859 "uuid": "0b58a785-51d4-4093-a535-426e7ac9aaba", 00:15:23.859 "md_size": 32, 00:15:23.859 "md_interleave": false, 00:15:23.859 "dif_type": 0, 00:15:23.859 "assigned_rate_limits": { 00:15:23.859 "rw_ios_per_sec": 0, 00:15:23.859 "rw_mbytes_per_sec": 0, 00:15:23.859 "r_mbytes_per_sec": 0, 00:15:23.859 "w_mbytes_per_sec": 0 00:15:23.859 }, 00:15:23.859 "claimed": true, 00:15:23.859 "claim_type": "exclusive_write", 00:15:23.859 "zoned": false, 00:15:23.859 "supported_io_types": { 00:15:23.859 "read": true, 00:15:23.859 "write": true, 00:15:23.859 "unmap": true, 00:15:23.859 "flush": true, 00:15:23.859 "reset": true, 00:15:23.859 "nvme_admin": false, 00:15:23.859 "nvme_io": false, 00:15:23.859 "nvme_io_md": false, 00:15:23.859 "write_zeroes": true, 00:15:23.859 "zcopy": true, 00:15:23.859 "get_zone_info": false, 00:15:23.859 "zone_management": false, 00:15:23.859 "zone_append": false, 00:15:23.859 "compare": false, 00:15:23.859 "compare_and_write": false, 00:15:23.859 "abort": true, 00:15:23.859 "seek_hole": false, 00:15:23.859 "seek_data": false, 00:15:23.859 "copy": true, 00:15:23.859 "nvme_iov_md": false 00:15:23.859 }, 00:15:23.859 "memory_domains": [ 00:15:23.859 { 00:15:23.859 "dma_device_id": "system", 00:15:23.859 "dma_device_type": 1 00:15:23.859 }, 
00:15:23.859 { 00:15:23.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.859 "dma_device_type": 2 00:15:23.859 } 00:15:23.859 ], 00:15:23.859 "driver_specific": {} 00:15:23.859 } 00:15:23.859 ] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.859 "name": "Existed_Raid", 00:15:23.859 "uuid": "f3e62cdf-e8b7-425a-a8d3-b4525a814de3", 00:15:23.859 "strip_size_kb": 0, 00:15:23.859 "state": "configuring", 00:15:23.859 "raid_level": "raid1", 00:15:23.859 "superblock": true, 00:15:23.859 "num_base_bdevs": 2, 00:15:23.859 "num_base_bdevs_discovered": 1, 00:15:23.859 "num_base_bdevs_operational": 2, 00:15:23.859 "base_bdevs_list": [ 00:15:23.859 { 00:15:23.859 "name": "BaseBdev1", 00:15:23.859 "uuid": "0b58a785-51d4-4093-a535-426e7ac9aaba", 00:15:23.859 "is_configured": true, 00:15:23.859 "data_offset": 256, 00:15:23.859 "data_size": 7936 00:15:23.859 }, 00:15:23.859 { 00:15:23.859 "name": "BaseBdev2", 00:15:23.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.859 "is_configured": false, 00:15:23.859 "data_offset": 0, 00:15:23.859 "data_size": 0 00:15:23.859 } 00:15:23.859 ] 00:15:23.859 }' 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.859 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:15:24.429 [2024-12-15 18:46:24.590306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.429 [2024-12-15 18:46:24.590398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.429 [2024-12-15 18:46:24.602371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.429 [2024-12-15 18:46:24.604150] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.429 [2024-12-15 18:46:24.604192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.429 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.430 "name": "Existed_Raid", 00:15:24.430 "uuid": "55ed3b5a-510f-4622-9b83-3687c793b53d", 00:15:24.430 "strip_size_kb": 0, 00:15:24.430 "state": "configuring", 00:15:24.430 "raid_level": "raid1", 00:15:24.430 "superblock": true, 00:15:24.430 "num_base_bdevs": 2, 00:15:24.430 "num_base_bdevs_discovered": 1, 00:15:24.430 
"num_base_bdevs_operational": 2, 00:15:24.430 "base_bdevs_list": [ 00:15:24.430 { 00:15:24.430 "name": "BaseBdev1", 00:15:24.430 "uuid": "0b58a785-51d4-4093-a535-426e7ac9aaba", 00:15:24.430 "is_configured": true, 00:15:24.430 "data_offset": 256, 00:15:24.430 "data_size": 7936 00:15:24.430 }, 00:15:24.430 { 00:15:24.430 "name": "BaseBdev2", 00:15:24.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.430 "is_configured": false, 00:15:24.430 "data_offset": 0, 00:15:24.430 "data_size": 0 00:15:24.430 } 00:15:24.430 ] 00:15:24.430 }' 00:15:24.430 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.430 18:46:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 [2024-12-15 18:46:25.021211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.690 [2024-12-15 18:46:25.021467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:24.690 [2024-12-15 18:46:25.021507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.690 [2024-12-15 18:46:25.021637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:24.690 [2024-12-15 18:46:25.021790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:24.690 [2024-12-15 18:46:25.021855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:24.690 BaseBdev2 
00:15:24.690 [2024-12-15 18:46:25.021997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.690 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.690 [ 00:15:24.690 { 00:15:24.690 "name": "BaseBdev2", 00:15:24.690 "aliases": [ 00:15:24.690 
"dc6f2ee8-aff6-451b-b9a1-fb5483c0acda" 00:15:24.690 ], 00:15:24.690 "product_name": "Malloc disk", 00:15:24.690 "block_size": 4096, 00:15:24.690 "num_blocks": 8192, 00:15:24.690 "uuid": "dc6f2ee8-aff6-451b-b9a1-fb5483c0acda", 00:15:24.690 "md_size": 32, 00:15:24.690 "md_interleave": false, 00:15:24.690 "dif_type": 0, 00:15:24.690 "assigned_rate_limits": { 00:15:24.690 "rw_ios_per_sec": 0, 00:15:24.690 "rw_mbytes_per_sec": 0, 00:15:24.690 "r_mbytes_per_sec": 0, 00:15:24.690 "w_mbytes_per_sec": 0 00:15:24.690 }, 00:15:24.690 "claimed": true, 00:15:24.690 "claim_type": "exclusive_write", 00:15:24.690 "zoned": false, 00:15:24.690 "supported_io_types": { 00:15:24.690 "read": true, 00:15:24.690 "write": true, 00:15:24.690 "unmap": true, 00:15:24.690 "flush": true, 00:15:24.690 "reset": true, 00:15:24.690 "nvme_admin": false, 00:15:24.690 "nvme_io": false, 00:15:24.690 "nvme_io_md": false, 00:15:24.690 "write_zeroes": true, 00:15:24.690 "zcopy": true, 00:15:24.690 "get_zone_info": false, 00:15:24.690 "zone_management": false, 00:15:24.690 "zone_append": false, 00:15:24.690 "compare": false, 00:15:24.690 "compare_and_write": false, 00:15:24.690 "abort": true, 00:15:24.690 "seek_hole": false, 00:15:24.690 "seek_data": false, 00:15:24.690 "copy": true, 00:15:24.690 "nvme_iov_md": false 00:15:24.690 }, 00:15:24.690 "memory_domains": [ 00:15:24.691 { 00:15:24.691 "dma_device_id": "system", 00:15:24.691 "dma_device_type": 1 00:15:24.691 }, 00:15:24.691 { 00:15:24.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.691 "dma_device_type": 2 00:15:24.691 } 00:15:24.691 ], 00:15:24.691 "driver_specific": {} 00:15:24.691 } 00:15:24.691 ] 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.691 18:46:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.691 "name": "Existed_Raid", 00:15:24.691 "uuid": "55ed3b5a-510f-4622-9b83-3687c793b53d", 00:15:24.691 "strip_size_kb": 0, 00:15:24.691 "state": "online", 00:15:24.691 "raid_level": "raid1", 00:15:24.691 "superblock": true, 00:15:24.691 "num_base_bdevs": 2, 00:15:24.691 "num_base_bdevs_discovered": 2, 00:15:24.691 "num_base_bdevs_operational": 2, 00:15:24.691 "base_bdevs_list": [ 00:15:24.691 { 00:15:24.691 "name": "BaseBdev1", 00:15:24.691 "uuid": "0b58a785-51d4-4093-a535-426e7ac9aaba", 00:15:24.691 "is_configured": true, 00:15:24.691 "data_offset": 256, 00:15:24.691 "data_size": 7936 00:15:24.691 }, 00:15:24.691 { 00:15:24.691 "name": "BaseBdev2", 00:15:24.691 "uuid": "dc6f2ee8-aff6-451b-b9a1-fb5483c0acda", 00:15:24.691 "is_configured": true, 00:15:24.691 "data_offset": 256, 00:15:24.691 "data_size": 7936 00:15:24.691 } 00:15:24.691 ] 00:15:24.691 }' 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.691 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.259 18:46:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.259 [2024-12-15 18:46:25.492775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:25.259 "name": "Existed_Raid", 00:15:25.259 "aliases": [ 00:15:25.259 "55ed3b5a-510f-4622-9b83-3687c793b53d" 00:15:25.259 ], 00:15:25.259 "product_name": "Raid Volume", 00:15:25.259 "block_size": 4096, 00:15:25.259 "num_blocks": 7936, 00:15:25.259 "uuid": "55ed3b5a-510f-4622-9b83-3687c793b53d", 00:15:25.259 "md_size": 32, 00:15:25.259 "md_interleave": false, 00:15:25.259 "dif_type": 0, 00:15:25.259 "assigned_rate_limits": { 00:15:25.259 "rw_ios_per_sec": 0, 00:15:25.259 "rw_mbytes_per_sec": 0, 00:15:25.259 "r_mbytes_per_sec": 0, 00:15:25.259 "w_mbytes_per_sec": 0 00:15:25.259 }, 00:15:25.259 "claimed": false, 00:15:25.259 "zoned": false, 00:15:25.259 "supported_io_types": { 00:15:25.259 "read": true, 00:15:25.259 "write": true, 00:15:25.259 "unmap": false, 00:15:25.259 "flush": false, 00:15:25.259 "reset": true, 00:15:25.259 "nvme_admin": false, 00:15:25.259 "nvme_io": false, 00:15:25.259 "nvme_io_md": false, 00:15:25.259 "write_zeroes": true, 00:15:25.259 "zcopy": false, 00:15:25.259 "get_zone_info": 
false, 00:15:25.259 "zone_management": false, 00:15:25.259 "zone_append": false, 00:15:25.259 "compare": false, 00:15:25.259 "compare_and_write": false, 00:15:25.259 "abort": false, 00:15:25.259 "seek_hole": false, 00:15:25.259 "seek_data": false, 00:15:25.259 "copy": false, 00:15:25.259 "nvme_iov_md": false 00:15:25.259 }, 00:15:25.259 "memory_domains": [ 00:15:25.259 { 00:15:25.259 "dma_device_id": "system", 00:15:25.259 "dma_device_type": 1 00:15:25.259 }, 00:15:25.259 { 00:15:25.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.259 "dma_device_type": 2 00:15:25.259 }, 00:15:25.259 { 00:15:25.259 "dma_device_id": "system", 00:15:25.259 "dma_device_type": 1 00:15:25.259 }, 00:15:25.259 { 00:15:25.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.259 "dma_device_type": 2 00:15:25.259 } 00:15:25.259 ], 00:15:25.259 "driver_specific": { 00:15:25.259 "raid": { 00:15:25.259 "uuid": "55ed3b5a-510f-4622-9b83-3687c793b53d", 00:15:25.259 "strip_size_kb": 0, 00:15:25.259 "state": "online", 00:15:25.259 "raid_level": "raid1", 00:15:25.259 "superblock": true, 00:15:25.259 "num_base_bdevs": 2, 00:15:25.259 "num_base_bdevs_discovered": 2, 00:15:25.259 "num_base_bdevs_operational": 2, 00:15:25.259 "base_bdevs_list": [ 00:15:25.259 { 00:15:25.259 "name": "BaseBdev1", 00:15:25.259 "uuid": "0b58a785-51d4-4093-a535-426e7ac9aaba", 00:15:25.259 "is_configured": true, 00:15:25.259 "data_offset": 256, 00:15:25.259 "data_size": 7936 00:15:25.259 }, 00:15:25.259 { 00:15:25.259 "name": "BaseBdev2", 00:15:25.259 "uuid": "dc6f2ee8-aff6-451b-b9a1-fb5483c0acda", 00:15:25.259 "is_configured": true, 00:15:25.259 "data_offset": 256, 00:15:25.259 "data_size": 7936 00:15:25.259 } 00:15:25.259 ] 00:15:25.259 } 00:15:25.259 } 00:15:25.259 }' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.259 18:46:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:25.259 BaseBdev2' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:25.259 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:25.260 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:25.260 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:25.260 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.260 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.260 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.520 [2024-12-15 18:46:25.712207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.520 "name": "Existed_Raid", 00:15:25.520 "uuid": 
"55ed3b5a-510f-4622-9b83-3687c793b53d", 00:15:25.520 "strip_size_kb": 0, 00:15:25.520 "state": "online", 00:15:25.520 "raid_level": "raid1", 00:15:25.520 "superblock": true, 00:15:25.520 "num_base_bdevs": 2, 00:15:25.520 "num_base_bdevs_discovered": 1, 00:15:25.520 "num_base_bdevs_operational": 1, 00:15:25.520 "base_bdevs_list": [ 00:15:25.520 { 00:15:25.520 "name": null, 00:15:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.520 "is_configured": false, 00:15:25.520 "data_offset": 0, 00:15:25.520 "data_size": 7936 00:15:25.520 }, 00:15:25.520 { 00:15:25.520 "name": "BaseBdev2", 00:15:25.520 "uuid": "dc6f2ee8-aff6-451b-b9a1-fb5483c0acda", 00:15:25.520 "is_configured": true, 00:15:25.520 "data_offset": 256, 00:15:25.520 "data_size": 7936 00:15:25.520 } 00:15:25.520 ] 00:15:25.520 }' 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.520 18:46:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.780 [2024-12-15 18:46:26.187573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.780 [2024-12-15 18:46:26.187671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.780 [2024-12-15 18:46:26.200055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.780 [2024-12-15 18:46:26.200153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.780 [2024-12-15 18:46:26.200194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.780 18:46:26 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.780 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99467 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99467 ']' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 99467 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99467 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.040 killing process with pid 99467 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99467' 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 99467 00:15:26.040 [2024-12-15 18:46:26.282596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:26.040 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 99467 00:15:26.040 [2024-12-15 18:46:26.283553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.300 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:26.301 00:15:26.301 real 0m3.846s 00:15:26.301 user 0m6.024s 00:15:26.301 sys 0m0.851s 00:15:26.301 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.301 ************************************ 00:15:26.301 END TEST raid_state_function_test_sb_md_separate 00:15:26.301 ************************************ 00:15:26.301 18:46:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.301 18:46:26 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:26.301 18:46:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:26.301 18:46:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.301 18:46:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.301 ************************************ 00:15:26.301 START TEST raid_superblock_test_md_separate 00:15:26.301 ************************************ 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99702 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99702 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 99702 ']' 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.301 18:46:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.301 [2024-12-15 18:46:26.685075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:26.301 [2024-12-15 18:46:26.685223] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99702 ] 00:15:26.560 [2024-12-15 18:46:26.861696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.560 [2024-12-15 18:46:26.888304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.560 [2024-12-15 18:46:26.931297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.560 [2024-12-15 18:46:26.931337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.128 malloc1 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.128 [2024-12-15 18:46:27.527270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:27.128 [2024-12-15 18:46:27.527422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.128 [2024-12-15 18:46:27.527465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:27.128 [2024-12-15 
18:46:27.527505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.128 [2024-12-15 18:46:27.529471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.128 [2024-12-15 18:46:27.529547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:27.128 pt1 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:27.128 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.129 malloc2 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.129 18:46:27 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.129 [2024-12-15 18:46:27.560214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.129 [2024-12-15 18:46:27.560325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.129 [2024-12-15 18:46:27.560346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:27.129 [2024-12-15 18:46:27.560357] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.129 [2024-12-15 18:46:27.562178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.129 [2024-12-15 18:46:27.562219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.129 pt2 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.129 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.389 [2024-12-15 18:46:27.572226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:27.389 
[2024-12-15 18:46:27.574153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.389 [2024-12-15 18:46:27.574334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:27.389 [2024-12-15 18:46:27.574386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:27.389 [2024-12-15 18:46:27.574488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:27.389 [2024-12-15 18:46:27.574640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:27.389 [2024-12-15 18:46:27.574680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:27.389 [2024-12-15 18:46:27.574797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.389 "name": "raid_bdev1", 00:15:27.389 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:27.389 "strip_size_kb": 0, 00:15:27.389 "state": "online", 00:15:27.389 "raid_level": "raid1", 00:15:27.389 "superblock": true, 00:15:27.389 "num_base_bdevs": 2, 00:15:27.389 "num_base_bdevs_discovered": 2, 00:15:27.389 "num_base_bdevs_operational": 2, 00:15:27.389 "base_bdevs_list": [ 00:15:27.389 { 00:15:27.389 "name": "pt1", 00:15:27.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.389 "is_configured": true, 00:15:27.389 "data_offset": 256, 00:15:27.389 "data_size": 7936 00:15:27.389 }, 00:15:27.389 { 00:15:27.389 "name": "pt2", 00:15:27.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.389 "is_configured": true, 00:15:27.389 "data_offset": 256, 00:15:27.389 "data_size": 7936 00:15:27.389 } 00:15:27.389 ] 00:15:27.389 }' 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.389 18:46:27 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.649 18:46:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.649 [2024-12-15 18:46:28.007732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.649 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.649 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.649 "name": "raid_bdev1", 00:15:27.649 "aliases": [ 00:15:27.649 "7eb25afc-b3aa-427c-9231-3d5055e18eae" 00:15:27.649 ], 00:15:27.649 "product_name": "Raid Volume", 00:15:27.649 "block_size": 4096, 00:15:27.649 "num_blocks": 7936, 00:15:27.649 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:27.649 "md_size": 32, 00:15:27.649 "md_interleave": false, 00:15:27.649 "dif_type": 0, 00:15:27.650 
"assigned_rate_limits": { 00:15:27.650 "rw_ios_per_sec": 0, 00:15:27.650 "rw_mbytes_per_sec": 0, 00:15:27.650 "r_mbytes_per_sec": 0, 00:15:27.650 "w_mbytes_per_sec": 0 00:15:27.650 }, 00:15:27.650 "claimed": false, 00:15:27.650 "zoned": false, 00:15:27.650 "supported_io_types": { 00:15:27.650 "read": true, 00:15:27.650 "write": true, 00:15:27.650 "unmap": false, 00:15:27.650 "flush": false, 00:15:27.650 "reset": true, 00:15:27.650 "nvme_admin": false, 00:15:27.650 "nvme_io": false, 00:15:27.650 "nvme_io_md": false, 00:15:27.650 "write_zeroes": true, 00:15:27.650 "zcopy": false, 00:15:27.650 "get_zone_info": false, 00:15:27.650 "zone_management": false, 00:15:27.650 "zone_append": false, 00:15:27.650 "compare": false, 00:15:27.650 "compare_and_write": false, 00:15:27.650 "abort": false, 00:15:27.650 "seek_hole": false, 00:15:27.650 "seek_data": false, 00:15:27.650 "copy": false, 00:15:27.650 "nvme_iov_md": false 00:15:27.650 }, 00:15:27.650 "memory_domains": [ 00:15:27.650 { 00:15:27.650 "dma_device_id": "system", 00:15:27.650 "dma_device_type": 1 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.650 "dma_device_type": 2 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "dma_device_id": "system", 00:15:27.650 "dma_device_type": 1 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.650 "dma_device_type": 2 00:15:27.650 } 00:15:27.650 ], 00:15:27.650 "driver_specific": { 00:15:27.650 "raid": { 00:15:27.650 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:27.650 "strip_size_kb": 0, 00:15:27.650 "state": "online", 00:15:27.650 "raid_level": "raid1", 00:15:27.650 "superblock": true, 00:15:27.650 "num_base_bdevs": 2, 00:15:27.650 "num_base_bdevs_discovered": 2, 00:15:27.650 "num_base_bdevs_operational": 2, 00:15:27.650 "base_bdevs_list": [ 00:15:27.650 { 00:15:27.650 "name": "pt1", 00:15:27.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.650 "is_configured": true, 
00:15:27.650 "data_offset": 256, 00:15:27.650 "data_size": 7936 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "name": "pt2", 00:15:27.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.650 "is_configured": true, 00:15:27.650 "data_offset": 256, 00:15:27.650 "data_size": 7936 00:15:27.650 } 00:15:27.650 ] 00:15:27.650 } 00:15:27.650 } 00:15:27.650 }' 00:15:27.650 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.650 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:27.650 pt2' 00:15:27.650 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 [2024-12-15 18:46:28.231335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7eb25afc-b3aa-427c-9231-3d5055e18eae 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z 7eb25afc-b3aa-427c-9231-3d5055e18eae ']' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 [2024-12-15 18:46:28.275007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.910 [2024-12-15 18:46:28.275032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.910 [2024-12-15 18:46:28.275104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.910 [2024-12-15 18:46:28.275152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.910 [2024-12-15 18:46:28.275194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.910 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:28.170 18:46:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 [2024-12-15 18:46:28.414833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:28.170 [2024-12-15 18:46:28.416628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:28.170 [2024-12-15 18:46:28.416725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:28.170 [2024-12-15 18:46:28.416833] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:28.170 [2024-12-15 18:46:28.416875] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.170 [2024-12-15 18:46:28.416895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:28.170 request: 00:15:28.170 { 00:15:28.170 "name": "raid_bdev1", 00:15:28.170 "raid_level": "raid1", 00:15:28.170 "base_bdevs": [ 00:15:28.170 "malloc1", 00:15:28.170 "malloc2" 00:15:28.170 ], 00:15:28.170 "superblock": false, 00:15:28.170 "method": "bdev_raid_create", 00:15:28.170 "req_id": 1 00:15:28.170 } 00:15:28.170 Got JSON-RPC error response 00:15:28.170 response: 00:15:28.170 { 00:15:28.170 "code": -17, 00:15:28.170 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:28.170 } 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 [2024-12-15 18:46:28.482659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.170 [2024-12-15 18:46:28.482751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.170 [2024-12-15 18:46:28.482783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:28.170 [2024-12-15 18:46:28.482823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.170 [2024-12-15 18:46:28.484585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.170 [2024-12-15 18:46:28.484655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.170 [2024-12-15 18:46:28.484716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:28.170 [2024-12-15 18:46:28.484790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:28.170 pt1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.170 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.171 "name": "raid_bdev1", 00:15:28.171 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:28.171 "strip_size_kb": 0, 00:15:28.171 "state": "configuring", 00:15:28.171 "raid_level": "raid1", 00:15:28.171 "superblock": true, 00:15:28.171 "num_base_bdevs": 2, 00:15:28.171 "num_base_bdevs_discovered": 1, 00:15:28.171 "num_base_bdevs_operational": 2, 00:15:28.171 "base_bdevs_list": [ 00:15:28.171 { 
00:15:28.171 "name": "pt1", 00:15:28.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.171 "is_configured": true, 00:15:28.171 "data_offset": 256, 00:15:28.171 "data_size": 7936 00:15:28.171 }, 00:15:28.171 { 00:15:28.171 "name": null, 00:15:28.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.171 "is_configured": false, 00:15:28.171 "data_offset": 256, 00:15:28.171 "data_size": 7936 00:15:28.171 } 00:15:28.171 ] 00:15:28.171 }' 00:15:28.171 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.171 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.740 [2024-12-15 18:46:28.961883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:28.740 [2024-12-15 18:46:28.961954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.740 [2024-12-15 18:46:28.961975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:28.740 [2024-12-15 18:46:28.961983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.740 [2024-12-15 18:46:28.962111] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:28.740 [2024-12-15 18:46:28.962124] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:28.740 [2024-12-15 18:46:28.962164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:28.740 [2024-12-15 18:46:28.962179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.740 [2024-12-15 18:46:28.962251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:28.740 [2024-12-15 18:46:28.962259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.740 [2024-12-15 18:46:28.962323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:28.740 [2024-12-15 18:46:28.962403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:28.740 [2024-12-15 18:46:28.962417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:28.740 [2024-12-15 18:46:28.962472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.740 pt2 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.740 18:46:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.740 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.740 "name": "raid_bdev1", 00:15:28.740 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:28.740 "strip_size_kb": 0, 00:15:28.740 "state": "online", 00:15:28.740 "raid_level": "raid1", 00:15:28.740 "superblock": true, 00:15:28.740 "num_base_bdevs": 2, 00:15:28.740 "num_base_bdevs_discovered": 2, 00:15:28.740 "num_base_bdevs_operational": 2, 00:15:28.740 "base_bdevs_list": [ 00:15:28.740 { 00:15:28.740 "name": "pt1", 00:15:28.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.740 
"is_configured": true, 00:15:28.740 "data_offset": 256, 00:15:28.740 "data_size": 7936 00:15:28.740 }, 00:15:28.740 { 00:15:28.740 "name": "pt2", 00:15:28.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.740 "is_configured": true, 00:15:28.740 "data_offset": 256, 00:15:28.740 "data_size": 7936 00:15:28.740 } 00:15:28.740 ] 00:15:28.740 }' 00:15:28.740 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.740 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.000 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.000 [2024-12-15 18:46:29.425295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.260 "name": "raid_bdev1", 00:15:29.260 "aliases": [ 00:15:29.260 "7eb25afc-b3aa-427c-9231-3d5055e18eae" 00:15:29.260 ], 00:15:29.260 "product_name": "Raid Volume", 00:15:29.260 "block_size": 4096, 00:15:29.260 "num_blocks": 7936, 00:15:29.260 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:29.260 "md_size": 32, 00:15:29.260 "md_interleave": false, 00:15:29.260 "dif_type": 0, 00:15:29.260 "assigned_rate_limits": { 00:15:29.260 "rw_ios_per_sec": 0, 00:15:29.260 "rw_mbytes_per_sec": 0, 00:15:29.260 "r_mbytes_per_sec": 0, 00:15:29.260 "w_mbytes_per_sec": 0 00:15:29.260 }, 00:15:29.260 "claimed": false, 00:15:29.260 "zoned": false, 00:15:29.260 "supported_io_types": { 00:15:29.260 "read": true, 00:15:29.260 "write": true, 00:15:29.260 "unmap": false, 00:15:29.260 "flush": false, 00:15:29.260 "reset": true, 00:15:29.260 "nvme_admin": false, 00:15:29.260 "nvme_io": false, 00:15:29.260 "nvme_io_md": false, 00:15:29.260 "write_zeroes": true, 00:15:29.260 "zcopy": false, 00:15:29.260 "get_zone_info": false, 00:15:29.260 "zone_management": false, 00:15:29.260 "zone_append": false, 00:15:29.260 "compare": false, 00:15:29.260 "compare_and_write": false, 00:15:29.260 "abort": false, 00:15:29.260 "seek_hole": false, 00:15:29.260 "seek_data": false, 00:15:29.260 "copy": false, 00:15:29.260 "nvme_iov_md": false 00:15:29.260 }, 00:15:29.260 "memory_domains": [ 00:15:29.260 { 00:15:29.260 "dma_device_id": "system", 00:15:29.260 "dma_device_type": 1 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.260 "dma_device_type": 2 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "dma_device_id": "system", 00:15:29.260 "dma_device_type": 1 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.260 "dma_device_type": 2 00:15:29.260 } 00:15:29.260 ], 00:15:29.260 "driver_specific": { 
00:15:29.260 "raid": { 00:15:29.260 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:29.260 "strip_size_kb": 0, 00:15:29.260 "state": "online", 00:15:29.260 "raid_level": "raid1", 00:15:29.260 "superblock": true, 00:15:29.260 "num_base_bdevs": 2, 00:15:29.260 "num_base_bdevs_discovered": 2, 00:15:29.260 "num_base_bdevs_operational": 2, 00:15:29.260 "base_bdevs_list": [ 00:15:29.260 { 00:15:29.260 "name": "pt1", 00:15:29.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.260 "is_configured": true, 00:15:29.260 "data_offset": 256, 00:15:29.260 "data_size": 7936 00:15:29.260 }, 00:15:29.260 { 00:15:29.260 "name": "pt2", 00:15:29.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.260 "is_configured": true, 00:15:29.260 "data_offset": 256, 00:15:29.260 "data_size": 7936 00:15:29.260 } 00:15:29.260 ] 00:15:29.260 } 00:15:29.260 } 00:15:29.260 }' 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:29.260 pt2' 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.260 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.261 18:46:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.261 [2024-12-15 18:46:29.649079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 7eb25afc-b3aa-427c-9231-3d5055e18eae '!=' 7eb25afc-b3aa-427c-9231-3d5055e18eae ']' 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.261 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.261 [2024-12-15 18:46:29.696853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.521 "name": "raid_bdev1", 00:15:29.521 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:29.521 "strip_size_kb": 0, 00:15:29.521 "state": "online", 00:15:29.521 "raid_level": "raid1", 00:15:29.521 "superblock": true, 00:15:29.521 "num_base_bdevs": 2, 00:15:29.521 "num_base_bdevs_discovered": 1, 00:15:29.521 "num_base_bdevs_operational": 1, 00:15:29.521 "base_bdevs_list": [ 00:15:29.521 { 00:15:29.521 "name": null, 00:15:29.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.521 "is_configured": false, 00:15:29.521 "data_offset": 0, 00:15:29.521 "data_size": 7936 00:15:29.521 }, 00:15:29.521 { 00:15:29.521 
"name": "pt2", 00:15:29.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.521 "is_configured": true, 00:15:29.521 "data_offset": 256, 00:15:29.521 "data_size": 7936 00:15:29.521 } 00:15:29.521 ] 00:15:29.521 }' 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.521 18:46:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.781 [2024-12-15 18:46:30.140232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.781 [2024-12-15 18:46:30.140313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.781 [2024-12-15 18:46:30.140385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.781 [2024-12-15 18:46:30.140440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.781 [2024-12-15 18:46:30.140471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.781 18:46:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:29.781 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.781 18:46:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.781 [2024-12-15 18:46:30.216128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:29.781 [2024-12-15 18:46:30.216178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.781 [2024-12-15 18:46:30.216194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:29.781 [2024-12-15 18:46:30.216203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.781 [2024-12-15 18:46:30.218119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.781 [2024-12-15 18:46:30.218202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:29.781 [2024-12-15 18:46:30.218257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:29.781 [2024-12-15 18:46:30.218286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:29.781 [2024-12-15 18:46:30.218348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:29.782 [2024-12-15 18:46:30.218356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:29.782 [2024-12-15 18:46:30.218445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:29.782 [2024-12-15 18:46:30.218524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:29.782 [2024-12-15 18:46:30.218533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:29.782 [2024-12-15 18:46:30.218593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.041 pt2 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.042 18:46:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.042 "name": "raid_bdev1", 00:15:30.042 "uuid": 
"7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:30.042 "strip_size_kb": 0, 00:15:30.042 "state": "online", 00:15:30.042 "raid_level": "raid1", 00:15:30.042 "superblock": true, 00:15:30.042 "num_base_bdevs": 2, 00:15:30.042 "num_base_bdevs_discovered": 1, 00:15:30.042 "num_base_bdevs_operational": 1, 00:15:30.042 "base_bdevs_list": [ 00:15:30.042 { 00:15:30.042 "name": null, 00:15:30.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.042 "is_configured": false, 00:15:30.042 "data_offset": 256, 00:15:30.042 "data_size": 7936 00:15:30.042 }, 00:15:30.042 { 00:15:30.042 "name": "pt2", 00:15:30.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.042 "is_configured": true, 00:15:30.042 "data_offset": 256, 00:15:30.042 "data_size": 7936 00:15:30.042 } 00:15:30.042 ] 00:15:30.042 }' 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.042 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.302 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.302 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.302 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.302 [2024-12-15 18:46:30.647397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.302 [2024-12-15 18:46:30.647473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.302 [2024-12-15 18:46:30.647540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.302 [2024-12-15 18:46:30.647591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.302 [2024-12-15 18:46:30.647623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.303 [2024-12-15 18:46:30.707287] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:30.303 [2024-12-15 18:46:30.707380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.303 [2024-12-15 18:46:30.707410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:30.303 [2024-12-15 18:46:30.707439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.303 [2024-12-15 
18:46:30.709269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.303 [2024-12-15 18:46:30.709343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:30.303 [2024-12-15 18:46:30.709404] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:30.303 [2024-12-15 18:46:30.709457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:30.303 [2024-12-15 18:46:30.709568] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:30.303 [2024-12-15 18:46:30.709619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.303 [2024-12-15 18:46:30.709665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:30.303 [2024-12-15 18:46:30.709740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.303 [2024-12-15 18:46:30.709836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:30.303 [2024-12-15 18:46:30.709878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:30.303 [2024-12-15 18:46:30.709947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:30.303 [2024-12-15 18:46:30.710061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:30.303 [2024-12-15 18:46:30.710110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:30.303 [2024-12-15 18:46:30.710218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.303 pt1 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.303 18:46:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.303 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.563 18:46:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.563 "name": "raid_bdev1", 00:15:30.563 "uuid": "7eb25afc-b3aa-427c-9231-3d5055e18eae", 00:15:30.563 "strip_size_kb": 0, 00:15:30.563 "state": "online", 00:15:30.563 "raid_level": "raid1", 00:15:30.563 "superblock": true, 00:15:30.563 "num_base_bdevs": 2, 00:15:30.563 "num_base_bdevs_discovered": 1, 00:15:30.563 "num_base_bdevs_operational": 1, 00:15:30.563 "base_bdevs_list": [ 00:15:30.563 { 00:15:30.563 "name": null, 00:15:30.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.563 "is_configured": false, 00:15:30.563 "data_offset": 256, 00:15:30.563 "data_size": 7936 00:15:30.563 }, 00:15:30.563 { 00:15:30.563 "name": "pt2", 00:15:30.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.563 "is_configured": true, 00:15:30.563 "data_offset": 256, 00:15:30.563 "data_size": 7936 00:15:30.563 } 00:15:30.563 ] 00:15:30.563 }' 00:15:30.563 18:46:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.563 18:46:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.823 [2024-12-15 18:46:31.218625] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 7eb25afc-b3aa-427c-9231-3d5055e18eae '!=' 7eb25afc-b3aa-427c-9231-3d5055e18eae ']' 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99702 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 99702 ']' 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 99702 00:15:30.823 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99702 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99702' 00:15:31.083 killing process with pid 99702 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 99702 00:15:31.083 [2024-12-15 18:46:31.303880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.083 [2024-12-15 18:46:31.303995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.083 [2024-12-15 18:46:31.304043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.083 [2024-12-15 18:46:31.304051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:31.083 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 99702 00:15:31.083 [2024-12-15 18:46:31.328911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.344 18:46:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:31.344 00:15:31.344 real 0m4.971s 00:15:31.344 user 0m8.121s 00:15:31.344 sys 0m1.134s 00:15:31.344 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.344 18:46:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.344 ************************************ 00:15:31.344 END TEST raid_superblock_test_md_separate 00:15:31.344 ************************************ 00:15:31.344 18:46:31 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:31.344 18:46:31 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:31.344 18:46:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:31.344 18:46:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.344 18:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.344 ************************************ 00:15:31.344 START TEST raid_rebuild_test_sb_md_separate 00:15:31.344 
************************************ 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=100015 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 100015 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 100015 ']' 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:31.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.344 18:46:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.344 [2024-12-15 18:46:31.738147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:31.344 [2024-12-15 18:46:31.738354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100015 ] 00:15:31.344 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:31.344 Zero copy mechanism will not be used. 00:15:31.605 [2024-12-15 18:46:31.910924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.605 [2024-12-15 18:46:31.937974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.605 [2024-12-15 18:46:31.981521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.605 [2024-12-15 18:46:31.981628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.175 18:46:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.175 BaseBdev1_malloc 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.175 [2024-12-15 18:46:32.594007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.175 [2024-12-15 18:46:32.594166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.175 [2024-12-15 18:46:32.594207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.175 [2024-12-15 18:46:32.594237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.175 [2024-12-15 18:46:32.596034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.175 [2024-12-15 18:46:32.596104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.175 BaseBdev1 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.175 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:32.435 BaseBdev2_malloc 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 [2024-12-15 18:46:32.623352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:32.435 [2024-12-15 18:46:32.623407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.435 [2024-12-15 18:46:32.623431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.435 [2024-12-15 18:46:32.623440] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.435 [2024-12-15 18:46:32.625247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.435 [2024-12-15 18:46:32.625285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.435 BaseBdev2 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 spare_malloc 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.435 18:46:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 spare_delay 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.435 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 [2024-12-15 18:46:32.681484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:32.435 [2024-12-15 18:46:32.681566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.435 [2024-12-15 18:46:32.681605] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:32.435 [2024-12-15 18:46:32.681620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.435 [2024-12-15 18:46:32.683989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.435 [2024-12-15 18:46:32.684030] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:32.435 spare 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:32.436 18:46:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.436 [2024-12-15 18:46:32.693456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.436 [2024-12-15 18:46:32.695284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.436 [2024-12-15 18:46:32.695432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:32.436 [2024-12-15 18:46:32.695464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:32.436 [2024-12-15 18:46:32.695533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:32.436 [2024-12-15 18:46:32.695624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:32.436 [2024-12-15 18:46:32.695634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:32.436 [2024-12-15 18:46:32.695719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.436 18:46:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.436 "name": "raid_bdev1", 00:15:32.436 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:32.436 "strip_size_kb": 0, 00:15:32.436 "state": "online", 00:15:32.436 "raid_level": "raid1", 00:15:32.436 "superblock": true, 00:15:32.436 "num_base_bdevs": 2, 00:15:32.436 "num_base_bdevs_discovered": 2, 00:15:32.436 "num_base_bdevs_operational": 2, 00:15:32.436 "base_bdevs_list": [ 00:15:32.436 { 00:15:32.436 "name": "BaseBdev1", 00:15:32.436 "uuid": "5a460598-faa7-5725-8c07-d901a83f320f", 00:15:32.436 "is_configured": true, 00:15:32.436 "data_offset": 256, 00:15:32.436 "data_size": 7936 00:15:32.436 }, 00:15:32.436 { 00:15:32.436 "name": "BaseBdev2", 00:15:32.436 "uuid": 
"51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:32.436 "is_configured": true, 00:15:32.436 "data_offset": 256, 00:15:32.436 "data_size": 7936 00:15:32.436 } 00:15:32.436 ] 00:15:32.436 }' 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.436 18:46:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-15 18:46:33.148956] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:33.006 18:46:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.006 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:33.006 [2024-12-15 18:46:33.420234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:33.006 /dev/nbd0 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.266 18:46:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.266 1+0 records in 00:15:33.266 1+0 records out 00:15:33.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472805 s, 8.7 MB/s 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:33.266 18:46:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:33.835 7936+0 records in 00:15:33.835 7936+0 records out 00:15:33.835 32505856 bytes (33 MB, 31 MiB) copied, 0.558028 s, 58.3 MB/s 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.835 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:33.835 [2024-12-15 18:46:34.267184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.095 [2024-12-15 18:46:34.309777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.095 18:46:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.095 "name": "raid_bdev1", 00:15:34.095 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:34.095 "strip_size_kb": 0, 00:15:34.095 "state": "online", 00:15:34.095 "raid_level": "raid1", 00:15:34.095 "superblock": true, 00:15:34.095 "num_base_bdevs": 2, 00:15:34.095 "num_base_bdevs_discovered": 1, 00:15:34.095 "num_base_bdevs_operational": 1, 00:15:34.095 "base_bdevs_list": [ 00:15:34.095 { 00:15:34.095 "name": null, 00:15:34.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.095 "is_configured": false, 00:15:34.095 "data_offset": 0, 00:15:34.095 "data_size": 7936 00:15:34.095 }, 00:15:34.095 { 00:15:34.095 "name": "BaseBdev2", 00:15:34.095 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:34.095 "is_configured": true, 00:15:34.095 "data_offset": 256, 00:15:34.095 "data_size": 7936 00:15:34.095 } 
00:15:34.095 ] 00:15:34.095 }' 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.095 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.355 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.355 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.355 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.355 [2024-12-15 18:46:34.709079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.355 [2024-12-15 18:46:34.711572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:34.355 [2024-12-15 18:46:34.713404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.355 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.355 18:46:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.306 18:46:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.306 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.584 "name": "raid_bdev1", 00:15:35.584 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:35.584 "strip_size_kb": 0, 00:15:35.584 "state": "online", 00:15:35.584 "raid_level": "raid1", 00:15:35.584 "superblock": true, 00:15:35.584 "num_base_bdevs": 2, 00:15:35.584 "num_base_bdevs_discovered": 2, 00:15:35.584 "num_base_bdevs_operational": 2, 00:15:35.584 "process": { 00:15:35.584 "type": "rebuild", 00:15:35.584 "target": "spare", 00:15:35.584 "progress": { 00:15:35.584 "blocks": 2560, 00:15:35.584 "percent": 32 00:15:35.584 } 00:15:35.584 }, 00:15:35.584 "base_bdevs_list": [ 00:15:35.584 { 00:15:35.584 "name": "spare", 00:15:35.584 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:35.584 "is_configured": true, 00:15:35.584 "data_offset": 256, 00:15:35.584 "data_size": 7936 00:15:35.584 }, 00:15:35.584 { 00:15:35.584 "name": "BaseBdev2", 00:15:35.584 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:35.584 "is_configured": true, 00:15:35.584 "data_offset": 256, 00:15:35.584 "data_size": 7936 00:15:35.584 } 00:15:35.584 ] 00:15:35.584 }' 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.584 [2024-12-15 18:46:35.876887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.584 [2024-12-15 18:46:35.918051] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.584 [2024-12-15 18:46:35.918166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.584 [2024-12-15 18:46:35.918210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.584 [2024-12-15 18:46:35.918235] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.584 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.584 "name": "raid_bdev1", 00:15:35.584 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:35.584 "strip_size_kb": 0, 00:15:35.584 "state": "online", 00:15:35.584 "raid_level": "raid1", 00:15:35.584 "superblock": true, 00:15:35.584 "num_base_bdevs": 2, 00:15:35.584 "num_base_bdevs_discovered": 1, 00:15:35.584 "num_base_bdevs_operational": 1, 00:15:35.584 "base_bdevs_list": [ 00:15:35.584 { 00:15:35.584 "name": null, 00:15:35.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.584 "is_configured": false, 00:15:35.584 "data_offset": 0, 00:15:35.584 "data_size": 7936 00:15:35.584 }, 00:15:35.584 { 00:15:35.584 "name": "BaseBdev2", 00:15:35.584 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:35.584 "is_configured": true, 00:15:35.584 "data_offset": 
256, 00:15:35.584 "data_size": 7936 00:15:35.584 } 00:15:35.584 ] 00:15:35.584 }' 00:15:35.585 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.585 18:46:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.171 "name": "raid_bdev1", 00:15:36.171 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:36.171 "strip_size_kb": 0, 00:15:36.171 "state": "online", 00:15:36.171 "raid_level": "raid1", 00:15:36.171 "superblock": true, 00:15:36.171 "num_base_bdevs": 2, 00:15:36.171 "num_base_bdevs_discovered": 1, 00:15:36.171 "num_base_bdevs_operational": 1, 
00:15:36.171 "base_bdevs_list": [ 00:15:36.171 { 00:15:36.171 "name": null, 00:15:36.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.171 "is_configured": false, 00:15:36.171 "data_offset": 0, 00:15:36.171 "data_size": 7936 00:15:36.171 }, 00:15:36.171 { 00:15:36.171 "name": "BaseBdev2", 00:15:36.171 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:36.171 "is_configured": true, 00:15:36.171 "data_offset": 256, 00:15:36.171 "data_size": 7936 00:15:36.171 } 00:15:36.171 ] 00:15:36.171 }' 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.171 [2024-12-15 18:46:36.580330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.171 [2024-12-15 18:46:36.582861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:36.171 [2024-12-15 18:46:36.584668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.171 18:46:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.553 18:46:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.553 "name": "raid_bdev1", 00:15:37.553 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:37.553 "strip_size_kb": 0, 00:15:37.553 "state": "online", 00:15:37.553 "raid_level": "raid1", 00:15:37.553 "superblock": true, 00:15:37.553 "num_base_bdevs": 2, 00:15:37.553 "num_base_bdevs_discovered": 2, 00:15:37.553 "num_base_bdevs_operational": 2, 00:15:37.553 "process": { 00:15:37.553 "type": "rebuild", 00:15:37.553 "target": "spare", 00:15:37.553 "progress": { 00:15:37.553 "blocks": 2560, 00:15:37.553 "percent": 32 00:15:37.553 } 00:15:37.553 }, 00:15:37.553 "base_bdevs_list": [ 00:15:37.553 { 00:15:37.553 "name": "spare", 00:15:37.553 "uuid": 
"f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:37.553 "is_configured": true, 00:15:37.553 "data_offset": 256, 00:15:37.553 "data_size": 7936 00:15:37.553 }, 00:15:37.553 { 00:15:37.553 "name": "BaseBdev2", 00:15:37.553 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:37.553 "is_configured": true, 00:15:37.553 "data_offset": 256, 00:15:37.553 "data_size": 7936 00:15:37.553 } 00:15:37.553 ] 00:15:37.553 }' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:37.553 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.553 
18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.553 "name": "raid_bdev1", 00:15:37.553 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:37.553 "strip_size_kb": 0, 00:15:37.553 "state": "online", 00:15:37.553 "raid_level": "raid1", 00:15:37.553 "superblock": true, 00:15:37.553 "num_base_bdevs": 2, 00:15:37.553 "num_base_bdevs_discovered": 2, 00:15:37.553 "num_base_bdevs_operational": 2, 00:15:37.553 "process": { 00:15:37.553 "type": "rebuild", 00:15:37.553 "target": "spare", 00:15:37.553 "progress": { 00:15:37.553 "blocks": 2816, 00:15:37.553 "percent": 35 00:15:37.553 } 00:15:37.553 }, 00:15:37.553 "base_bdevs_list": [ 00:15:37.553 { 00:15:37.553 "name": "spare", 00:15:37.553 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:37.553 "is_configured": true, 00:15:37.553 "data_offset": 256, 00:15:37.553 "data_size": 7936 00:15:37.553 
}, 00:15:37.553 { 00:15:37.553 "name": "BaseBdev2", 00:15:37.553 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:37.553 "is_configured": true, 00:15:37.553 "data_offset": 256, 00:15:37.553 "data_size": 7936 00:15:37.553 } 00:15:37.553 ] 00:15:37.553 }' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.553 18:46:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.493 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.752 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.752 "name": "raid_bdev1", 00:15:38.752 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:38.752 "strip_size_kb": 0, 00:15:38.752 "state": "online", 00:15:38.752 "raid_level": "raid1", 00:15:38.752 "superblock": true, 00:15:38.752 "num_base_bdevs": 2, 00:15:38.752 "num_base_bdevs_discovered": 2, 00:15:38.752 "num_base_bdevs_operational": 2, 00:15:38.752 "process": { 00:15:38.752 "type": "rebuild", 00:15:38.752 "target": "spare", 00:15:38.752 "progress": { 00:15:38.752 "blocks": 5888, 00:15:38.752 "percent": 74 00:15:38.752 } 00:15:38.752 }, 00:15:38.752 "base_bdevs_list": [ 00:15:38.752 { 00:15:38.752 "name": "spare", 00:15:38.752 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:38.752 "is_configured": true, 00:15:38.752 "data_offset": 256, 00:15:38.752 "data_size": 7936 00:15:38.752 }, 00:15:38.752 { 00:15:38.752 "name": "BaseBdev2", 00:15:38.752 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:38.752 "is_configured": true, 00:15:38.752 "data_offset": 256, 00:15:38.752 "data_size": 7936 00:15:38.752 } 00:15:38.752 ] 00:15:38.752 }' 00:15:38.752 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.752 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.752 18:46:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.752 18:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.752 18:46:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.322 [2024-12-15 18:46:39.695297] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:39.322 [2024-12-15 18:46:39.695368] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:39.322 [2024-12-15 18:46:39.695460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.927 "name": "raid_bdev1", 00:15:39.927 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:39.927 
"strip_size_kb": 0, 00:15:39.927 "state": "online", 00:15:39.927 "raid_level": "raid1", 00:15:39.927 "superblock": true, 00:15:39.927 "num_base_bdevs": 2, 00:15:39.927 "num_base_bdevs_discovered": 2, 00:15:39.927 "num_base_bdevs_operational": 2, 00:15:39.927 "base_bdevs_list": [ 00:15:39.927 { 00:15:39.927 "name": "spare", 00:15:39.927 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:39.927 "is_configured": true, 00:15:39.927 "data_offset": 256, 00:15:39.927 "data_size": 7936 00:15:39.927 }, 00:15:39.927 { 00:15:39.927 "name": "BaseBdev2", 00:15:39.927 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:39.927 "is_configured": true, 00:15:39.927 "data_offset": 256, 00:15:39.927 "data_size": 7936 00:15:39.927 } 00:15:39.927 ] 00:15:39.927 }' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.927 18:46:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.927 "name": "raid_bdev1", 00:15:39.927 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:39.927 "strip_size_kb": 0, 00:15:39.927 "state": "online", 00:15:39.927 "raid_level": "raid1", 00:15:39.927 "superblock": true, 00:15:39.927 "num_base_bdevs": 2, 00:15:39.927 "num_base_bdevs_discovered": 2, 00:15:39.927 "num_base_bdevs_operational": 2, 00:15:39.927 "base_bdevs_list": [ 00:15:39.927 { 00:15:39.927 "name": "spare", 00:15:39.927 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:39.927 "is_configured": true, 00:15:39.927 "data_offset": 256, 00:15:39.927 "data_size": 7936 00:15:39.927 }, 00:15:39.927 { 00:15:39.927 "name": "BaseBdev2", 00:15:39.927 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:39.927 "is_configured": true, 00:15:39.927 "data_offset": 256, 00:15:39.927 "data_size": 7936 00:15:39.927 } 00:15:39.927 ] 00:15:39.927 }' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.927 18:46:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.927 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.187 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.187 "name": "raid_bdev1", 00:15:40.187 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:40.187 "strip_size_kb": 0, 00:15:40.187 "state": "online", 00:15:40.187 "raid_level": "raid1", 00:15:40.187 "superblock": true, 00:15:40.187 "num_base_bdevs": 2, 00:15:40.187 "num_base_bdevs_discovered": 2, 00:15:40.187 "num_base_bdevs_operational": 2, 00:15:40.187 "base_bdevs_list": [ 00:15:40.187 { 00:15:40.187 "name": "spare", 00:15:40.187 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:40.187 "is_configured": true, 00:15:40.187 "data_offset": 256, 00:15:40.187 "data_size": 7936 00:15:40.187 }, 00:15:40.187 { 00:15:40.187 "name": "BaseBdev2", 00:15:40.187 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:40.187 "is_configured": true, 00:15:40.187 "data_offset": 256, 00:15:40.187 "data_size": 7936 00:15:40.187 } 00:15:40.187 ] 00:15:40.187 }' 00:15:40.187 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.187 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.447 [2024-12-15 18:46:40.752430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.447 [2024-12-15 18:46:40.752528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.447 [2024-12-15 18:46:40.752607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.447 [2024-12-15 18:46:40.752689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:15:40.447 [2024-12-15 18:46:40.752702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.447 18:46:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.447 18:46:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:40.707 /dev/nbd0 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.707 1+0 records in 00:15:40.707 1+0 records out 00:15:40.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370784 
s, 11.0 MB/s 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.707 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.708 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:40.968 /dev/nbd1 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.968 1+0 records in 00:15:40.968 1+0 records out 00:15:40.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036824 s, 11.1 MB/s 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.968 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.227 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:41.488 
18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.488 [2024-12-15 18:46:41.862782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.488 [2024-12-15 18:46:41.862935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.488 [2024-12-15 18:46:41.862974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:15:41.488 [2024-12-15 18:46:41.863006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.488 [2024-12-15 18:46:41.864890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.488 [2024-12-15 18:46:41.864968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.488 [2024-12-15 18:46:41.865048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:41.488 [2024-12-15 18:46:41.865111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.488 [2024-12-15 18:46:41.865230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.488 spare 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.488 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 [2024-12-15 18:46:41.965179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:41.748 [2024-12-15 18:46:41.965247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:41.748 [2024-12-15 18:46:41.965364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:41.748 [2024-12-15 18:46:41.965496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:41.748 [2024-12-15 18:46:41.965543] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:41.748 [2024-12-15 18:46:41.965690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 18:46:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.748 18:46:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.748 "name": "raid_bdev1", 00:15:41.748 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:41.748 "strip_size_kb": 0, 00:15:41.748 "state": "online", 00:15:41.748 "raid_level": "raid1", 00:15:41.748 "superblock": true, 00:15:41.748 "num_base_bdevs": 2, 00:15:41.748 "num_base_bdevs_discovered": 2, 00:15:41.748 "num_base_bdevs_operational": 2, 00:15:41.748 "base_bdevs_list": [ 00:15:41.748 { 00:15:41.748 "name": "spare", 00:15:41.748 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:41.748 "is_configured": true, 00:15:41.748 "data_offset": 256, 00:15:41.748 "data_size": 7936 00:15:41.748 }, 00:15:41.748 { 00:15:41.748 "name": "BaseBdev2", 00:15:41.748 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:41.748 "is_configured": true, 00:15:41.748 "data_offset": 256, 00:15:41.748 "data_size": 7936 00:15:41.748 } 00:15:41.748 ] 00:15:41.748 }' 00:15:41.748 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.748 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.007 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.267 "name": "raid_bdev1", 00:15:42.267 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:42.267 "strip_size_kb": 0, 00:15:42.267 "state": "online", 00:15:42.267 "raid_level": "raid1", 00:15:42.267 "superblock": true, 00:15:42.267 "num_base_bdevs": 2, 00:15:42.267 "num_base_bdevs_discovered": 2, 00:15:42.267 "num_base_bdevs_operational": 2, 00:15:42.267 "base_bdevs_list": [ 00:15:42.267 { 00:15:42.267 "name": "spare", 00:15:42.267 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:42.267 "is_configured": true, 00:15:42.267 "data_offset": 256, 00:15:42.267 "data_size": 7936 00:15:42.267 }, 00:15:42.267 { 00:15:42.267 "name": "BaseBdev2", 00:15:42.267 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:42.267 "is_configured": true, 00:15:42.267 "data_offset": 256, 00:15:42.267 "data_size": 7936 00:15:42.267 } 00:15:42.267 ] 00:15:42.267 }' 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.267 [2024-12-15 18:46:42.601587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.267 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.268 18:46:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.268 "name": "raid_bdev1", 00:15:42.268 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:42.268 "strip_size_kb": 0, 00:15:42.268 "state": "online", 00:15:42.268 "raid_level": "raid1", 00:15:42.268 "superblock": true, 00:15:42.268 "num_base_bdevs": 2, 00:15:42.268 "num_base_bdevs_discovered": 1, 00:15:42.268 "num_base_bdevs_operational": 1, 00:15:42.268 "base_bdevs_list": [ 00:15:42.268 { 00:15:42.268 "name": null, 00:15:42.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.268 "is_configured": false, 00:15:42.268 "data_offset": 0, 00:15:42.268 "data_size": 7936 00:15:42.268 }, 00:15:42.268 { 00:15:42.268 "name": "BaseBdev2", 00:15:42.268 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:42.268 "is_configured": true, 00:15:42.268 "data_offset": 256, 00:15:42.268 "data_size": 7936 00:15:42.268 } 
00:15:42.268 ] 00:15:42.268 }' 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.268 18:46:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.837 18:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.837 18:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.837 18:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:42.837 [2024-12-15 18:46:43.016907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.837 [2024-12-15 18:46:43.017121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.837 [2024-12-15 18:46:43.017197] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:42.837 [2024-12-15 18:46:43.017260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.837 [2024-12-15 18:46:43.019712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:42.837 [2024-12-15 18:46:43.021569] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.837 18:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.837 18:46:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.775 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.776 "name": "raid_bdev1", 00:15:43.776 
"uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:43.776 "strip_size_kb": 0, 00:15:43.776 "state": "online", 00:15:43.776 "raid_level": "raid1", 00:15:43.776 "superblock": true, 00:15:43.776 "num_base_bdevs": 2, 00:15:43.776 "num_base_bdevs_discovered": 2, 00:15:43.776 "num_base_bdevs_operational": 2, 00:15:43.776 "process": { 00:15:43.776 "type": "rebuild", 00:15:43.776 "target": "spare", 00:15:43.776 "progress": { 00:15:43.776 "blocks": 2560, 00:15:43.776 "percent": 32 00:15:43.776 } 00:15:43.776 }, 00:15:43.776 "base_bdevs_list": [ 00:15:43.776 { 00:15:43.776 "name": "spare", 00:15:43.776 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:43.776 "is_configured": true, 00:15:43.776 "data_offset": 256, 00:15:43.776 "data_size": 7936 00:15:43.776 }, 00:15:43.776 { 00:15:43.776 "name": "BaseBdev2", 00:15:43.776 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:43.776 "is_configured": true, 00:15:43.776 "data_offset": 256, 00:15:43.776 "data_size": 7936 00:15:43.776 } 00:15:43.776 ] 00:15:43.776 }' 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.776 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.776 [2024-12-15 18:46:44.184958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.035 
[2024-12-15 18:46:44.225729] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:44.035 [2024-12-15 18:46:44.225785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.035 [2024-12-15 18:46:44.225812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.035 [2024-12-15 18:46:44.225820] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.035 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.035 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:44.035 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.036 18:46:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.036 "name": "raid_bdev1", 00:15:44.036 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:44.036 "strip_size_kb": 0, 00:15:44.036 "state": "online", 00:15:44.036 "raid_level": "raid1", 00:15:44.036 "superblock": true, 00:15:44.036 "num_base_bdevs": 2, 00:15:44.036 "num_base_bdevs_discovered": 1, 00:15:44.036 "num_base_bdevs_operational": 1, 00:15:44.036 "base_bdevs_list": [ 00:15:44.036 { 00:15:44.036 "name": null, 00:15:44.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.036 "is_configured": false, 00:15:44.036 "data_offset": 0, 00:15:44.036 "data_size": 7936 00:15:44.036 }, 00:15:44.036 { 00:15:44.036 "name": "BaseBdev2", 00:15:44.036 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:44.036 "is_configured": true, 00:15:44.036 "data_offset": 256, 00:15:44.036 "data_size": 7936 00:15:44.036 } 00:15:44.036 ] 00:15:44.036 }' 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.036 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.295 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.295 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.295 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.295 [2024-12-15 18:46:44.676451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.295 [2024-12-15 18:46:44.676575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.295 [2024-12-15 18:46:44.676621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:44.295 [2024-12-15 18:46:44.676648] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.295 [2024-12-15 18:46:44.676909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.295 [2024-12-15 18:46:44.676969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.295 [2024-12-15 18:46:44.677057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.295 [2024-12-15 18:46:44.677094] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.295 [2024-12-15 18:46:44.677136] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:44.295 [2024-12-15 18:46:44.677204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.295 [2024-12-15 18:46:44.679517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:44.295 [2024-12-15 18:46:44.681318] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.295 spare 00:15:44.295 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.295 18:46:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.677 "name": 
"raid_bdev1", 00:15:45.677 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:45.677 "strip_size_kb": 0, 00:15:45.677 "state": "online", 00:15:45.677 "raid_level": "raid1", 00:15:45.677 "superblock": true, 00:15:45.677 "num_base_bdevs": 2, 00:15:45.677 "num_base_bdevs_discovered": 2, 00:15:45.677 "num_base_bdevs_operational": 2, 00:15:45.677 "process": { 00:15:45.677 "type": "rebuild", 00:15:45.677 "target": "spare", 00:15:45.677 "progress": { 00:15:45.677 "blocks": 2560, 00:15:45.677 "percent": 32 00:15:45.677 } 00:15:45.677 }, 00:15:45.677 "base_bdevs_list": [ 00:15:45.677 { 00:15:45.677 "name": "spare", 00:15:45.677 "uuid": "f01de897-6257-52f4-b068-931b8bbd21f7", 00:15:45.677 "is_configured": true, 00:15:45.677 "data_offset": 256, 00:15:45.677 "data_size": 7936 00:15:45.677 }, 00:15:45.677 { 00:15:45.677 "name": "BaseBdev2", 00:15:45.677 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:45.677 "is_configured": true, 00:15:45.677 "data_offset": 256, 00:15:45.677 "data_size": 7936 00:15:45.677 } 00:15:45.677 ] 00:15:45.677 }' 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.677 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 [2024-12-15 18:46:45.845115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:45.677 [2024-12-15 18:46:45.885426] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.677 [2024-12-15 18:46:45.885486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.677 [2024-12-15 18:46:45.885500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.678 [2024-12-15 18:46:45.885509] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.678 "name": "raid_bdev1", 00:15:45.678 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:45.678 "strip_size_kb": 0, 00:15:45.678 "state": "online", 00:15:45.678 "raid_level": "raid1", 00:15:45.678 "superblock": true, 00:15:45.678 "num_base_bdevs": 2, 00:15:45.678 "num_base_bdevs_discovered": 1, 00:15:45.678 "num_base_bdevs_operational": 1, 00:15:45.678 "base_bdevs_list": [ 00:15:45.678 { 00:15:45.678 "name": null, 00:15:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.678 "is_configured": false, 00:15:45.678 "data_offset": 0, 00:15:45.678 "data_size": 7936 00:15:45.678 }, 00:15:45.678 { 00:15:45.678 "name": "BaseBdev2", 00:15:45.678 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:45.678 "is_configured": true, 00:15:45.678 "data_offset": 256, 00:15:45.678 "data_size": 7936 00:15:45.678 } 00:15:45.678 ] 00:15:45.678 }' 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.678 18:46:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.937 18:46:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.937 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.197 "name": "raid_bdev1", 00:15:46.197 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:46.197 "strip_size_kb": 0, 00:15:46.197 "state": "online", 00:15:46.197 "raid_level": "raid1", 00:15:46.197 "superblock": true, 00:15:46.197 "num_base_bdevs": 2, 00:15:46.197 "num_base_bdevs_discovered": 1, 00:15:46.197 "num_base_bdevs_operational": 1, 00:15:46.197 "base_bdevs_list": [ 00:15:46.197 { 00:15:46.197 "name": null, 00:15:46.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.197 "is_configured": false, 00:15:46.197 "data_offset": 0, 00:15:46.197 "data_size": 7936 00:15:46.197 }, 00:15:46.197 { 00:15:46.197 "name": "BaseBdev2", 00:15:46.197 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:46.197 "is_configured": true, 00:15:46.197 "data_offset": 256, 00:15:46.197 "data_size": 7936 00:15:46.197 } 00:15:46.197 ] 00:15:46.197 }' 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.197 [2024-12-15 18:46:46.480213] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:46.197 [2024-12-15 18:46:46.480266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.197 [2024-12-15 18:46:46.480283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:46.197 [2024-12-15 18:46:46.480293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.197 [2024-12-15 18:46:46.480473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.197 [2024-12-15 18:46:46.480488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:15:46.197 [2024-12-15 18:46:46.480530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:46.197 [2024-12-15 18:46:46.480548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.197 [2024-12-15 18:46:46.480555] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:46.197 [2024-12-15 18:46:46.480567] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:46.197 BaseBdev1 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.197 18:46:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.135 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.136 "name": "raid_bdev1", 00:15:47.136 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:47.136 "strip_size_kb": 0, 00:15:47.136 "state": "online", 00:15:47.136 "raid_level": "raid1", 00:15:47.136 "superblock": true, 00:15:47.136 "num_base_bdevs": 2, 00:15:47.136 "num_base_bdevs_discovered": 1, 00:15:47.136 "num_base_bdevs_operational": 1, 00:15:47.136 "base_bdevs_list": [ 00:15:47.136 { 00:15:47.136 "name": null, 00:15:47.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.136 "is_configured": false, 00:15:47.136 "data_offset": 0, 00:15:47.136 "data_size": 7936 00:15:47.136 }, 00:15:47.136 { 00:15:47.136 "name": "BaseBdev2", 00:15:47.136 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:47.136 "is_configured": true, 00:15:47.136 "data_offset": 256, 00:15:47.136 "data_size": 7936 00:15:47.136 } 00:15:47.136 ] 00:15:47.136 }' 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.136 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.705 18:46:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.705 "name": "raid_bdev1", 00:15:47.705 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:47.705 "strip_size_kb": 0, 00:15:47.705 "state": "online", 00:15:47.705 "raid_level": "raid1", 00:15:47.705 "superblock": true, 00:15:47.705 "num_base_bdevs": 2, 00:15:47.705 "num_base_bdevs_discovered": 1, 00:15:47.705 "num_base_bdevs_operational": 1, 00:15:47.705 "base_bdevs_list": [ 00:15:47.705 { 00:15:47.705 "name": null, 00:15:47.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.705 "is_configured": false, 00:15:47.705 "data_offset": 0, 00:15:47.705 "data_size": 7936 00:15:47.705 }, 00:15:47.705 { 00:15:47.705 "name": "BaseBdev2", 00:15:47.705 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:47.705 "is_configured": 
true, 00:15:47.705 "data_offset": 256, 00:15:47.705 "data_size": 7936 00:15:47.705 } 00:15:47.705 ] 00:15:47.705 }' 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.705 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.705 [2024-12-15 18:46:48.129507] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.705 [2024-12-15 18:46:48.129673] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.705 [2024-12-15 18:46:48.129687] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:47.705 request: 00:15:47.705 { 00:15:47.705 "base_bdev": "BaseBdev1", 00:15:47.705 "raid_bdev": "raid_bdev1", 00:15:47.705 "method": "bdev_raid_add_base_bdev", 00:15:47.705 "req_id": 1 00:15:47.705 } 00:15:47.705 Got JSON-RPC error response 00:15:47.705 response: 00:15:47.705 { 00:15:47.705 "code": -22, 00:15:47.705 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:47.705 } 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.706 18:46:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.087 "name": "raid_bdev1", 00:15:49.087 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:49.087 "strip_size_kb": 0, 00:15:49.087 "state": "online", 00:15:49.087 "raid_level": "raid1", 00:15:49.087 "superblock": true, 00:15:49.087 "num_base_bdevs": 2, 00:15:49.087 "num_base_bdevs_discovered": 1, 00:15:49.087 "num_base_bdevs_operational": 1, 00:15:49.087 "base_bdevs_list": [ 00:15:49.087 { 00:15:49.087 "name": null, 00:15:49.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.087 "is_configured": false, 00:15:49.087 
"data_offset": 0, 00:15:49.087 "data_size": 7936 00:15:49.087 }, 00:15:49.087 { 00:15:49.087 "name": "BaseBdev2", 00:15:49.087 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:49.087 "is_configured": true, 00:15:49.087 "data_offset": 256, 00:15:49.087 "data_size": 7936 00:15:49.087 } 00:15:49.087 ] 00:15:49.087 }' 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.087 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.347 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.347 "name": "raid_bdev1", 00:15:49.347 "uuid": "b6bfefbb-96c1-4e19-8853-f75912959668", 00:15:49.347 
"strip_size_kb": 0, 00:15:49.347 "state": "online", 00:15:49.347 "raid_level": "raid1", 00:15:49.347 "superblock": true, 00:15:49.347 "num_base_bdevs": 2, 00:15:49.347 "num_base_bdevs_discovered": 1, 00:15:49.347 "num_base_bdevs_operational": 1, 00:15:49.347 "base_bdevs_list": [ 00:15:49.347 { 00:15:49.347 "name": null, 00:15:49.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.347 "is_configured": false, 00:15:49.347 "data_offset": 0, 00:15:49.347 "data_size": 7936 00:15:49.348 }, 00:15:49.348 { 00:15:49.348 "name": "BaseBdev2", 00:15:49.348 "uuid": "51bc37ca-c7ee-5b51-8c67-5ddfa1baa2a1", 00:15:49.348 "is_configured": true, 00:15:49.348 "data_offset": 256, 00:15:49.348 "data_size": 7936 00:15:49.348 } 00:15:49.348 ] 00:15:49.348 }' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 100015 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 100015 ']' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 100015 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100015 00:15:49.348 killing process 
with pid 100015 00:15:49.348 Received shutdown signal, test time was about 60.000000 seconds 00:15:49.348 00:15:49.348 Latency(us) 00:15:49.348 [2024-12-15T18:46:49.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.348 [2024-12-15T18:46:49.789Z] =================================================================================================================== 00:15:49.348 [2024-12-15T18:46:49.789Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100015' 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 100015 00:15:49.348 [2024-12-15 18:46:49.778953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.348 [2024-12-15 18:46:49.779064] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.348 [2024-12-15 18:46:49.779111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.348 [2024-12-15 18:46:49.779120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:49.348 18:46:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 100015 00:15:49.608 [2024-12-15 18:46:49.813524] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.608 18:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:15:49.608 ************************************ 00:15:49.608 END TEST raid_rebuild_test_sb_md_separate 00:15:49.608 
************************************ 00:15:49.608 00:15:49.608 real 0m18.386s 00:15:49.608 user 0m24.396s 00:15:49.608 sys 0m2.699s 00:15:49.608 18:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.608 18:46:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.868 18:46:50 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:49.868 18:46:50 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:49.868 18:46:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:49.868 18:46:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.868 18:46:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.868 ************************************ 00:15:49.868 START TEST raid_state_function_test_sb_md_interleaved 00:15:49.868 ************************************ 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.868 18:46:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100702 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.868 Process raid pid: 100702 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100702' 00:15:49.868 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100702 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100702 ']' 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.869 18:46:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.869 [2024-12-15 18:46:50.199899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:49.869 [2024-12-15 18:46:50.200169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.128 [2024-12-15 18:46:50.373947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.128 [2024-12-15 18:46:50.400709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.128 [2024-12-15 18:46:50.443904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.128 [2024-12-15 18:46:50.444018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 [2024-12-15 18:46:51.018673] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.698 [2024-12-15 18:46:51.018818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.698 [2024-12-15 18:46:51.018851] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.698 [2024-12-15 18:46:51.018876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.698 18:46:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.698 18:46:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.698 "name": "Existed_Raid", 00:15:50.698 "uuid": "b08c88f7-9b6d-47a8-b652-50d21d30e530", 00:15:50.698 "strip_size_kb": 0, 00:15:50.698 "state": "configuring", 00:15:50.698 "raid_level": "raid1", 00:15:50.698 "superblock": true, 00:15:50.698 "num_base_bdevs": 2, 00:15:50.698 "num_base_bdevs_discovered": 0, 00:15:50.698 "num_base_bdevs_operational": 2, 00:15:50.698 "base_bdevs_list": [ 00:15:50.698 { 00:15:50.698 "name": "BaseBdev1", 00:15:50.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.698 "is_configured": false, 00:15:50.698 "data_offset": 0, 00:15:50.698 "data_size": 0 00:15:50.698 }, 00:15:50.698 { 00:15:50.698 "name": "BaseBdev2", 00:15:50.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.698 "is_configured": false, 00:15:50.698 "data_offset": 0, 00:15:50.698 "data_size": 0 00:15:50.698 } 00:15:50.698 ] 00:15:50.698 }' 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.698 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.268 [2024-12-15 18:46:51.453862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.268 [2024-12-15 18:46:51.453971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.268 [2024-12-15 18:46:51.465860] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.268 [2024-12-15 18:46:51.465945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.268 [2024-12-15 18:46:51.465972] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.268 [2024-12-15 18:46:51.465994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.268 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.269 [2024-12-15 18:46:51.486868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.269 BaseBdev1 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.269 [ 00:15:51.269 { 00:15:51.269 "name": "BaseBdev1", 00:15:51.269 "aliases": [ 00:15:51.269 "03943721-e6bf-4fd6-9300-35ccf76c7118" 00:15:51.269 ], 00:15:51.269 "product_name": "Malloc disk", 00:15:51.269 "block_size": 4128, 00:15:51.269 "num_blocks": 8192, 00:15:51.269 "uuid": "03943721-e6bf-4fd6-9300-35ccf76c7118", 00:15:51.269 "md_size": 32, 00:15:51.269 
"md_interleave": true, 00:15:51.269 "dif_type": 0, 00:15:51.269 "assigned_rate_limits": { 00:15:51.269 "rw_ios_per_sec": 0, 00:15:51.269 "rw_mbytes_per_sec": 0, 00:15:51.269 "r_mbytes_per_sec": 0, 00:15:51.269 "w_mbytes_per_sec": 0 00:15:51.269 }, 00:15:51.269 "claimed": true, 00:15:51.269 "claim_type": "exclusive_write", 00:15:51.269 "zoned": false, 00:15:51.269 "supported_io_types": { 00:15:51.269 "read": true, 00:15:51.269 "write": true, 00:15:51.269 "unmap": true, 00:15:51.269 "flush": true, 00:15:51.269 "reset": true, 00:15:51.269 "nvme_admin": false, 00:15:51.269 "nvme_io": false, 00:15:51.269 "nvme_io_md": false, 00:15:51.269 "write_zeroes": true, 00:15:51.269 "zcopy": true, 00:15:51.269 "get_zone_info": false, 00:15:51.269 "zone_management": false, 00:15:51.269 "zone_append": false, 00:15:51.269 "compare": false, 00:15:51.269 "compare_and_write": false, 00:15:51.269 "abort": true, 00:15:51.269 "seek_hole": false, 00:15:51.269 "seek_data": false, 00:15:51.269 "copy": true, 00:15:51.269 "nvme_iov_md": false 00:15:51.269 }, 00:15:51.269 "memory_domains": [ 00:15:51.269 { 00:15:51.269 "dma_device_id": "system", 00:15:51.269 "dma_device_type": 1 00:15:51.269 }, 00:15:51.269 { 00:15:51.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.269 "dma_device_type": 2 00:15:51.269 } 00:15:51.269 ], 00:15:51.269 "driver_specific": {} 00:15:51.269 } 00:15:51.269 ] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.269 18:46:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.269 "name": "Existed_Raid", 00:15:51.269 "uuid": "4793b95b-a8a2-458b-8584-ae99b3df4b47", 00:15:51.269 "strip_size_kb": 0, 00:15:51.269 "state": "configuring", 00:15:51.269 "raid_level": "raid1", 
00:15:51.269 "superblock": true, 00:15:51.269 "num_base_bdevs": 2, 00:15:51.269 "num_base_bdevs_discovered": 1, 00:15:51.269 "num_base_bdevs_operational": 2, 00:15:51.269 "base_bdevs_list": [ 00:15:51.269 { 00:15:51.269 "name": "BaseBdev1", 00:15:51.269 "uuid": "03943721-e6bf-4fd6-9300-35ccf76c7118", 00:15:51.269 "is_configured": true, 00:15:51.269 "data_offset": 256, 00:15:51.269 "data_size": 7936 00:15:51.269 }, 00:15:51.269 { 00:15:51.269 "name": "BaseBdev2", 00:15:51.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.269 "is_configured": false, 00:15:51.269 "data_offset": 0, 00:15:51.269 "data_size": 0 00:15:51.269 } 00:15:51.269 ] 00:15:51.269 }' 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.269 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.839 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.839 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.839 18:46:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.839 [2024-12-15 18:46:52.002087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.839 [2024-12-15 18:46:52.002190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.839 [2024-12-15 18:46:52.014101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.839 [2024-12-15 18:46:52.015814] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.839 [2024-12-15 18:46:52.015853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.839 
18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.839 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.840 "name": "Existed_Raid", 00:15:51.840 "uuid": "2a1129f0-085a-45a9-ba27-800fdb03aee2", 00:15:51.840 "strip_size_kb": 0, 00:15:51.840 "state": "configuring", 00:15:51.840 "raid_level": "raid1", 00:15:51.840 "superblock": true, 00:15:51.840 "num_base_bdevs": 2, 00:15:51.840 "num_base_bdevs_discovered": 1, 00:15:51.840 "num_base_bdevs_operational": 2, 00:15:51.840 "base_bdevs_list": [ 00:15:51.840 { 00:15:51.840 "name": "BaseBdev1", 00:15:51.840 "uuid": "03943721-e6bf-4fd6-9300-35ccf76c7118", 00:15:51.840 "is_configured": true, 00:15:51.840 "data_offset": 256, 00:15:51.840 "data_size": 7936 00:15:51.840 }, 00:15:51.840 { 00:15:51.840 "name": "BaseBdev2", 00:15:51.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.840 "is_configured": false, 00:15:51.840 "data_offset": 0, 00:15:51.840 "data_size": 0 00:15:51.840 } 00:15:51.840 ] 00:15:51.840 }' 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:15:51.840 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.100 [2024-12-15 18:46:52.496437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.100 [2024-12-15 18:46:52.496696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:52.100 [2024-12-15 18:46:52.496739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:52.100 [2024-12-15 18:46:52.496908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:52.100 [2024-12-15 18:46:52.497030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:52.100 [2024-12-15 18:46:52.497077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:52.100 BaseBdev2 00:15:52.100 [2024-12-15 18:46:52.497195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.100 [ 00:15:52.100 { 00:15:52.100 "name": "BaseBdev2", 00:15:52.100 "aliases": [ 00:15:52.100 "13c3f3c6-eb9a-44ca-aa3a-b4452997fd8a" 00:15:52.100 ], 00:15:52.100 "product_name": "Malloc disk", 00:15:52.100 "block_size": 4128, 00:15:52.100 "num_blocks": 8192, 00:15:52.100 "uuid": "13c3f3c6-eb9a-44ca-aa3a-b4452997fd8a", 00:15:52.100 "md_size": 32, 00:15:52.100 "md_interleave": true, 00:15:52.100 "dif_type": 0, 00:15:52.100 "assigned_rate_limits": { 00:15:52.100 "rw_ios_per_sec": 0, 00:15:52.100 "rw_mbytes_per_sec": 0, 00:15:52.100 "r_mbytes_per_sec": 0, 00:15:52.100 "w_mbytes_per_sec": 0 00:15:52.100 }, 00:15:52.100 "claimed": true, 00:15:52.100 "claim_type": "exclusive_write", 
00:15:52.100 "zoned": false, 00:15:52.100 "supported_io_types": { 00:15:52.100 "read": true, 00:15:52.100 "write": true, 00:15:52.100 "unmap": true, 00:15:52.100 "flush": true, 00:15:52.100 "reset": true, 00:15:52.100 "nvme_admin": false, 00:15:52.100 "nvme_io": false, 00:15:52.100 "nvme_io_md": false, 00:15:52.100 "write_zeroes": true, 00:15:52.100 "zcopy": true, 00:15:52.100 "get_zone_info": false, 00:15:52.100 "zone_management": false, 00:15:52.100 "zone_append": false, 00:15:52.100 "compare": false, 00:15:52.100 "compare_and_write": false, 00:15:52.100 "abort": true, 00:15:52.100 "seek_hole": false, 00:15:52.100 "seek_data": false, 00:15:52.100 "copy": true, 00:15:52.100 "nvme_iov_md": false 00:15:52.100 }, 00:15:52.100 "memory_domains": [ 00:15:52.100 { 00:15:52.100 "dma_device_id": "system", 00:15:52.100 "dma_device_type": 1 00:15:52.100 }, 00:15:52.100 { 00:15:52.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.100 "dma_device_type": 2 00:15:52.100 } 00:15:52.100 ], 00:15:52.100 "driver_specific": {} 00:15:52.100 } 00:15:52.100 ] 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.100 
18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.100 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.101 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.360 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.360 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.360 "name": "Existed_Raid", 00:15:52.360 "uuid": "2a1129f0-085a-45a9-ba27-800fdb03aee2", 00:15:52.360 "strip_size_kb": 0, 00:15:52.360 "state": "online", 00:15:52.360 "raid_level": "raid1", 00:15:52.360 "superblock": true, 00:15:52.360 "num_base_bdevs": 2, 00:15:52.360 "num_base_bdevs_discovered": 2, 00:15:52.360 
"num_base_bdevs_operational": 2, 00:15:52.360 "base_bdevs_list": [ 00:15:52.360 { 00:15:52.360 "name": "BaseBdev1", 00:15:52.360 "uuid": "03943721-e6bf-4fd6-9300-35ccf76c7118", 00:15:52.360 "is_configured": true, 00:15:52.360 "data_offset": 256, 00:15:52.360 "data_size": 7936 00:15:52.360 }, 00:15:52.360 { 00:15:52.360 "name": "BaseBdev2", 00:15:52.360 "uuid": "13c3f3c6-eb9a-44ca-aa3a-b4452997fd8a", 00:15:52.360 "is_configured": true, 00:15:52.360 "data_offset": 256, 00:15:52.360 "data_size": 7936 00:15:52.360 } 00:15:52.360 ] 00:15:52.360 }' 00:15:52.360 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.360 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.623 18:46:52 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.623 [2024-12-15 18:46:52.951946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.623 "name": "Existed_Raid", 00:15:52.623 "aliases": [ 00:15:52.623 "2a1129f0-085a-45a9-ba27-800fdb03aee2" 00:15:52.623 ], 00:15:52.623 "product_name": "Raid Volume", 00:15:52.623 "block_size": 4128, 00:15:52.623 "num_blocks": 7936, 00:15:52.623 "uuid": "2a1129f0-085a-45a9-ba27-800fdb03aee2", 00:15:52.623 "md_size": 32, 00:15:52.623 "md_interleave": true, 00:15:52.623 "dif_type": 0, 00:15:52.623 "assigned_rate_limits": { 00:15:52.623 "rw_ios_per_sec": 0, 00:15:52.623 "rw_mbytes_per_sec": 0, 00:15:52.623 "r_mbytes_per_sec": 0, 00:15:52.623 "w_mbytes_per_sec": 0 00:15:52.623 }, 00:15:52.623 "claimed": false, 00:15:52.623 "zoned": false, 00:15:52.623 "supported_io_types": { 00:15:52.623 "read": true, 00:15:52.623 "write": true, 00:15:52.623 "unmap": false, 00:15:52.623 "flush": false, 00:15:52.623 "reset": true, 00:15:52.623 "nvme_admin": false, 00:15:52.623 "nvme_io": false, 00:15:52.623 "nvme_io_md": false, 00:15:52.623 "write_zeroes": true, 00:15:52.623 "zcopy": false, 00:15:52.623 "get_zone_info": false, 00:15:52.623 "zone_management": false, 00:15:52.623 "zone_append": false, 00:15:52.623 "compare": false, 00:15:52.623 "compare_and_write": false, 00:15:52.623 "abort": false, 00:15:52.623 "seek_hole": false, 00:15:52.623 "seek_data": false, 00:15:52.623 "copy": false, 00:15:52.623 "nvme_iov_md": false 00:15:52.623 }, 00:15:52.623 "memory_domains": [ 00:15:52.623 { 00:15:52.623 "dma_device_id": "system", 00:15:52.623 "dma_device_type": 1 00:15:52.623 }, 00:15:52.623 { 00:15:52.623 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:52.623 "dma_device_type": 2 00:15:52.623 }, 00:15:52.623 { 00:15:52.623 "dma_device_id": "system", 00:15:52.623 "dma_device_type": 1 00:15:52.623 }, 00:15:52.623 { 00:15:52.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.623 "dma_device_type": 2 00:15:52.623 } 00:15:52.623 ], 00:15:52.623 "driver_specific": { 00:15:52.623 "raid": { 00:15:52.623 "uuid": "2a1129f0-085a-45a9-ba27-800fdb03aee2", 00:15:52.623 "strip_size_kb": 0, 00:15:52.623 "state": "online", 00:15:52.623 "raid_level": "raid1", 00:15:52.623 "superblock": true, 00:15:52.623 "num_base_bdevs": 2, 00:15:52.623 "num_base_bdevs_discovered": 2, 00:15:52.623 "num_base_bdevs_operational": 2, 00:15:52.623 "base_bdevs_list": [ 00:15:52.623 { 00:15:52.623 "name": "BaseBdev1", 00:15:52.623 "uuid": "03943721-e6bf-4fd6-9300-35ccf76c7118", 00:15:52.623 "is_configured": true, 00:15:52.623 "data_offset": 256, 00:15:52.623 "data_size": 7936 00:15:52.623 }, 00:15:52.623 { 00:15:52.623 "name": "BaseBdev2", 00:15:52.623 "uuid": "13c3f3c6-eb9a-44ca-aa3a-b4452997fd8a", 00:15:52.623 "is_configured": true, 00:15:52.623 "data_offset": 256, 00:15:52.623 "data_size": 7936 00:15:52.623 } 00:15:52.623 ] 00:15:52.623 } 00:15:52.623 } 00:15:52.623 }' 00:15:52.623 18:46:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.623 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:52.623 BaseBdev2' 00:15:52.623 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:52.884 
18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.884 [2024-12-15 18:46:53.195326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.884 18:46:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.884 "name": "Existed_Raid", 00:15:52.884 "uuid": "2a1129f0-085a-45a9-ba27-800fdb03aee2", 00:15:52.884 "strip_size_kb": 0, 00:15:52.884 "state": "online", 00:15:52.884 "raid_level": "raid1", 00:15:52.884 "superblock": true, 00:15:52.884 "num_base_bdevs": 2, 00:15:52.884 "num_base_bdevs_discovered": 1, 00:15:52.884 "num_base_bdevs_operational": 1, 00:15:52.884 "base_bdevs_list": [ 00:15:52.884 { 00:15:52.884 "name": null, 00:15:52.884 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:52.884 "is_configured": false, 00:15:52.884 "data_offset": 0, 00:15:52.884 "data_size": 7936 00:15:52.884 }, 00:15:52.884 { 00:15:52.884 "name": "BaseBdev2", 00:15:52.884 "uuid": "13c3f3c6-eb9a-44ca-aa3a-b4452997fd8a", 00:15:52.884 "is_configured": true, 00:15:52.884 "data_offset": 256, 00:15:52.884 "data_size": 7936 00:15:52.884 } 00:15:52.884 ] 00:15:52.884 }' 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.884 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:53.481 18:46:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 [2024-12-15 18:46:53.730227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.481 [2024-12-15 18:46:53.730379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.481 [2024-12-15 18:46:53.742452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.481 [2024-12-15 18:46:53.742570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.481 [2024-12-15 18:46:53.742609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100702 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100702 ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100702 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100702 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.481 killing process with pid 100702 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100702' 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 100702 00:15:53.481 [2024-12-15 18:46:53.842542] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.481 18:46:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 100702 00:15:53.481 [2024-12-15 18:46:53.843473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:15:53.741 18:46:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:15:53.741 00:15:53.741 real 0m3.966s 00:15:53.741 user 0m6.251s 00:15:53.741 sys 0m0.864s 00:15:53.741 18:46:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.741 ************************************ 00:15:53.741 END TEST raid_state_function_test_sb_md_interleaved 00:15:53.741 ************************************ 00:15:53.741 18:46:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 18:46:54 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:53.741 18:46:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:53.742 18:46:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.742 18:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.742 ************************************ 00:15:53.742 START TEST raid_superblock_test_md_interleaved 00:15:53.742 ************************************ 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # 
local base_bdevs_pt 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100940 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100940 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 100940 ']' 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.742 18:46:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.001 [2024-12-15 18:46:54.250571] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:54.001 [2024-12-15 18:46:54.250878] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100940 ] 00:15:54.001 [2024-12-15 18:46:54.426886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.261 [2024-12-15 18:46:54.455558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.261 [2024-12-15 18:46:54.499783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.261 [2024-12-15 18:46:54.499903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.835 malloc1 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.835 [2024-12-15 18:46:55.131506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.835 [2024-12-15 18:46:55.131662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.835 [2024-12-15 18:46:55.131703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:15:54.835 [2024-12-15 18:46:55.131733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.835 [2024-12-15 18:46:55.133589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.835 [2024-12-15 18:46:55.133666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.835 pt1 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.835 malloc2 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.835 [2024-12-15 18:46:55.160175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.835 [2024-12-15 18:46:55.160283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.835 [2024-12-15 18:46:55.160318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.835 [2024-12-15 18:46:55.160347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.835 [2024-12-15 18:46:55.162169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.835 [2024-12-15 18:46:55.162244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.835 pt2 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.835 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.836 [2024-12-15 
18:46:55.172200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.836 [2024-12-15 18:46:55.174023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.836 [2024-12-15 18:46:55.174211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:54.836 [2024-12-15 18:46:55.174260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:54.836 [2024-12-15 18:46:55.174358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:54.836 [2024-12-15 18:46:55.174460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:54.836 [2024-12-15 18:46:55.174502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:54.836 [2024-12-15 18:46:55.174607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.836 "name": "raid_bdev1", 00:15:54.836 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68", 00:15:54.836 "strip_size_kb": 0, 00:15:54.836 "state": "online", 00:15:54.836 "raid_level": "raid1", 00:15:54.836 "superblock": true, 00:15:54.836 "num_base_bdevs": 2, 00:15:54.836 "num_base_bdevs_discovered": 2, 00:15:54.836 "num_base_bdevs_operational": 2, 00:15:54.836 "base_bdevs_list": [ 00:15:54.836 { 00:15:54.836 "name": "pt1", 00:15:54.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:54.836 "is_configured": true, 00:15:54.836 "data_offset": 256, 00:15:54.836 "data_size": 7936 00:15:54.836 }, 00:15:54.836 { 00:15:54.836 "name": "pt2", 00:15:54.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:54.836 "is_configured": true, 00:15:54.836 "data_offset": 256, 00:15:54.836 "data_size": 7936 00:15:54.836 } 00:15:54.836 ] 00:15:54.836 }' 00:15:54.836 18:46:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.836 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.403 [2024-12-15 18:46:55.643638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.403 "name": "raid_bdev1", 00:15:55.403 "aliases": [ 00:15:55.403 "c2a34784-9a0a-4913-a5fb-c65369302f68" 00:15:55.403 ], 00:15:55.403 "product_name": "Raid Volume", 00:15:55.403 "block_size": 4128, 00:15:55.403 
"num_blocks": 7936, 00:15:55.403 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68", 00:15:55.403 "md_size": 32, 00:15:55.403 "md_interleave": true, 00:15:55.403 "dif_type": 0, 00:15:55.403 "assigned_rate_limits": { 00:15:55.403 "rw_ios_per_sec": 0, 00:15:55.403 "rw_mbytes_per_sec": 0, 00:15:55.403 "r_mbytes_per_sec": 0, 00:15:55.403 "w_mbytes_per_sec": 0 00:15:55.403 }, 00:15:55.403 "claimed": false, 00:15:55.403 "zoned": false, 00:15:55.403 "supported_io_types": { 00:15:55.403 "read": true, 00:15:55.403 "write": true, 00:15:55.403 "unmap": false, 00:15:55.403 "flush": false, 00:15:55.403 "reset": true, 00:15:55.403 "nvme_admin": false, 00:15:55.403 "nvme_io": false, 00:15:55.403 "nvme_io_md": false, 00:15:55.403 "write_zeroes": true, 00:15:55.403 "zcopy": false, 00:15:55.403 "get_zone_info": false, 00:15:55.403 "zone_management": false, 00:15:55.403 "zone_append": false, 00:15:55.403 "compare": false, 00:15:55.403 "compare_and_write": false, 00:15:55.403 "abort": false, 00:15:55.403 "seek_hole": false, 00:15:55.403 "seek_data": false, 00:15:55.403 "copy": false, 00:15:55.403 "nvme_iov_md": false 00:15:55.403 }, 00:15:55.403 "memory_domains": [ 00:15:55.403 { 00:15:55.403 "dma_device_id": "system", 00:15:55.403 "dma_device_type": 1 00:15:55.403 }, 00:15:55.403 { 00:15:55.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.403 "dma_device_type": 2 00:15:55.403 }, 00:15:55.403 { 00:15:55.403 "dma_device_id": "system", 00:15:55.403 "dma_device_type": 1 00:15:55.403 }, 00:15:55.403 { 00:15:55.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.403 "dma_device_type": 2 00:15:55.403 } 00:15:55.403 ], 00:15:55.403 "driver_specific": { 00:15:55.403 "raid": { 00:15:55.403 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68", 00:15:55.403 "strip_size_kb": 0, 00:15:55.403 "state": "online", 00:15:55.403 "raid_level": "raid1", 00:15:55.403 "superblock": true, 00:15:55.403 "num_base_bdevs": 2, 00:15:55.403 "num_base_bdevs_discovered": 2, 00:15:55.403 "num_base_bdevs_operational": 
2, 00:15:55.403 "base_bdevs_list": [ 00:15:55.403 { 00:15:55.403 "name": "pt1", 00:15:55.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.403 "is_configured": true, 00:15:55.403 "data_offset": 256, 00:15:55.403 "data_size": 7936 00:15:55.403 }, 00:15:55.403 { 00:15:55.403 "name": "pt2", 00:15:55.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.403 "is_configured": true, 00:15:55.403 "data_offset": 256, 00:15:55.403 "data_size": 7936 00:15:55.403 } 00:15:55.403 ] 00:15:55.403 } 00:15:55.403 } 00:15:55.403 }' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:55.403 pt2' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.403 18:46:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.403 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:55.405 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.405 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.405 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:55.665 [2024-12-15 18:46:55.883204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c2a34784-9a0a-4913-a5fb-c65369302f68
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c2a34784-9a0a-4913-a5fb-c65369302f68 ']'
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 [2024-12-15 18:46:55.910890] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:55.665 [2024-12-15 18:46:55.910917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:55.665 [2024-12-15 18:46:55.910987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:55.665 [2024-12-15 18:46:55.911044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:55.665 [2024-12-15 18:46:55.911053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 18:46:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 [2024-12-15 18:46:56.050676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:55.665 [2024-12-15 18:46:56.052582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:55.665 [2024-12-15 18:46:56.052678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:55.665 [2024-12-15 18:46:56.052729] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:55.665 [2024-12-15 18:46:56.052744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:55.665 [2024-12-15 18:46:56.052752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:15:55.665 request:
00:15:55.665 {
00:15:55.665 "name": "raid_bdev1",
00:15:55.665 "raid_level": "raid1",
00:15:55.665 "base_bdevs": [
00:15:55.665 "malloc1",
00:15:55.665 "malloc2"
00:15:55.665 ],
00:15:55.665 "superblock": false,
00:15:55.665 "method": "bdev_raid_create",
00:15:55.665 "req_id": 1
00:15:55.665 }
00:15:55.665 Got JSON-RPC error response
00:15:55.665 response:
00:15:55.665 {
00:15:55.665 "code": -17,
00:15:55.665 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:55.665 }
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.665 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.924 [2024-12-15 18:46:56.118531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:55.924 [2024-12-15 18:46:56.118621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:55.924 [2024-12-15 18:46:56.118655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:55.924 [2024-12-15 18:46:56.118682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:55.924 [2024-12-15 18:46:56.120603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:55.924 [2024-12-15 18:46:56.120671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:55.924 [2024-12-15 18:46:56.120728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:55.924 [2024-12-15 18:46:56.120776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:55.924 pt1
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:55.924 "name": "raid_bdev1",
00:15:55.924 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:55.924 "strip_size_kb": 0,
00:15:55.924 "state": "configuring",
00:15:55.924 "raid_level": "raid1",
00:15:55.924 "superblock": true,
00:15:55.924 "num_base_bdevs": 2,
00:15:55.924 "num_base_bdevs_discovered": 1,
00:15:55.924 "num_base_bdevs_operational": 2,
00:15:55.924 "base_bdevs_list": [
00:15:55.924 {
00:15:55.924 "name": "pt1",
00:15:55.924 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:55.924 "is_configured": true,
00:15:55.924 "data_offset": 256,
00:15:55.924 "data_size": 7936
00:15:55.924 },
00:15:55.924 {
00:15:55.924 "name": null,
00:15:55.924 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:55.924 "is_configured": false,
00:15:55.924 "data_offset": 256,
00:15:55.924 "data_size": 7936
00:15:55.924 }
00:15:55.924 ]
00:15:55.924 }'
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:55.924 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.184 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.184 [2024-12-15 18:46:56.621678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:56.184 [2024-12-15 18:46:56.621732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:56.184 [2024-12-15 18:46:56.621754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:56.184 [2024-12-15 18:46:56.621763] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:56.184 [2024-12-15 18:46:56.621879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:56.184 [2024-12-15 18:46:56.621891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:56.184 [2024-12-15 18:46:56.621927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:56.184 [2024-12-15 18:46:56.621942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:56.184 [2024-12-15 18:46:56.622025] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:15:56.184 [2024-12-15 18:46:56.622035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:15:56.184 [2024-12-15 18:46:56.622101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:15:56.184 [2024-12-15 18:46:56.622158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:15:56.184 [2024-12-15 18:46:56.622173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:15:56.184 [2024-12-15 18:46:56.622217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:56.444 pt2
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.444 "name": "raid_bdev1",
00:15:56.444 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:56.444 "strip_size_kb": 0,
00:15:56.444 "state": "online",
00:15:56.444 "raid_level": "raid1",
00:15:56.444 "superblock": true,
00:15:56.444 "num_base_bdevs": 2,
00:15:56.444 "num_base_bdevs_discovered": 2,
00:15:56.444 "num_base_bdevs_operational": 2,
00:15:56.444 "base_bdevs_list": [
00:15:56.444 {
00:15:56.444 "name": "pt1",
00:15:56.444 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:56.444 "is_configured": true,
00:15:56.444 "data_offset": 256,
00:15:56.444 "data_size": 7936
00:15:56.444 },
00:15:56.444 {
00:15:56.444 "name": "pt2",
00:15:56.444 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:56.444 "is_configured": true,
00:15:56.444 "data_offset": 256,
00:15:56.444 "data_size": 7936
00:15:56.444 }
00:15:56.444 ]
00:15:56.444 }'
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.444 18:46:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:56.704 [2024-12-15 18:46:57.061149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:56.704 "name": "raid_bdev1",
00:15:56.704 "aliases": [
00:15:56.704 "c2a34784-9a0a-4913-a5fb-c65369302f68"
00:15:56.704 ],
00:15:56.704 "product_name": "Raid Volume",
00:15:56.704 "block_size": 4128,
00:15:56.704 "num_blocks": 7936,
00:15:56.704 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:56.704 "md_size": 32,
00:15:56.704 "md_interleave": true,
00:15:56.704 "dif_type": 0,
00:15:56.704 "assigned_rate_limits": {
00:15:56.704 "rw_ios_per_sec": 0,
00:15:56.704 "rw_mbytes_per_sec": 0,
00:15:56.704 "r_mbytes_per_sec": 0,
00:15:56.704 "w_mbytes_per_sec": 0
00:15:56.704 },
00:15:56.704 "claimed": false,
00:15:56.704 "zoned": false,
00:15:56.704 "supported_io_types": {
00:15:56.704 "read": true,
00:15:56.704 "write": true,
00:15:56.704 "unmap": false,
00:15:56.704 "flush": false,
00:15:56.704 "reset": true,
00:15:56.704 "nvme_admin": false,
00:15:56.704 "nvme_io": false,
00:15:56.704 "nvme_io_md": false,
00:15:56.704 "write_zeroes": true,
00:15:56.704 "zcopy": false,
00:15:56.704 "get_zone_info": false,
00:15:56.704 "zone_management": false,
00:15:56.704 "zone_append": false,
00:15:56.704 "compare": false,
00:15:56.704 "compare_and_write": false,
00:15:56.704 "abort": false,
00:15:56.704 "seek_hole": false,
00:15:56.704 "seek_data": false,
00:15:56.704 "copy": false,
00:15:56.704 "nvme_iov_md": false
00:15:56.704 },
00:15:56.704 "memory_domains": [
00:15:56.704 {
00:15:56.704 "dma_device_id": "system",
00:15:56.704 "dma_device_type": 1
00:15:56.704 },
00:15:56.704 {
00:15:56.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:56.704 "dma_device_type": 2
00:15:56.704 },
00:15:56.704 {
00:15:56.704 "dma_device_id": "system",
00:15:56.704 "dma_device_type": 1
00:15:56.704 },
00:15:56.704 {
00:15:56.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:56.704 "dma_device_type": 2
00:15:56.704 }
00:15:56.704 ],
00:15:56.704 "driver_specific": {
00:15:56.704 "raid": {
00:15:56.704 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:56.704 "strip_size_kb": 0,
00:15:56.704 "state": "online",
00:15:56.704 "raid_level": "raid1",
00:15:56.704 "superblock": true,
00:15:56.704 "num_base_bdevs": 2,
00:15:56.704 "num_base_bdevs_discovered": 2,
00:15:56.704 "num_base_bdevs_operational": 2,
00:15:56.704 "base_bdevs_list": [
00:15:56.704 {
00:15:56.704 "name": "pt1",
00:15:56.704 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:56.704 "is_configured": true,
00:15:56.704 "data_offset": 256,
00:15:56.704 "data_size": 7936
00:15:56.704 },
00:15:56.704 {
00:15:56.704 "name": "pt2",
00:15:56.704 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:56.704 "is_configured": true,
00:15:56.704 "data_offset": 256,
00:15:56.704 "data_size": 7936
00:15:56.704 }
00:15:56.704 ]
00:15:56.704 }
00:15:56.704 }
00:15:56.704 }'
00:15:56.704 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:56.964 pt2'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:56.964 [2024-12-15 18:46:57.292851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c2a34784-9a0a-4913-a5fb-c65369302f68 '!=' c2a34784-9a0a-4913-a5fb-c65369302f68 ']'
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.964 [2024-12-15 18:46:57.340535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:56.964 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:56.965 "name": "raid_bdev1",
00:15:56.965 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:56.965 "strip_size_kb": 0,
00:15:56.965 "state": "online",
00:15:56.965 "raid_level": "raid1",
00:15:56.965 "superblock": true,
00:15:56.965 "num_base_bdevs": 2,
00:15:56.965 "num_base_bdevs_discovered": 1,
00:15:56.965 "num_base_bdevs_operational": 1,
00:15:56.965 "base_bdevs_list": [
00:15:56.965 {
00:15:56.965 "name": null,
00:15:56.965 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:56.965 "is_configured": false,
00:15:56.965 "data_offset": 0,
00:15:56.965 "data_size": 7936
00:15:56.965 },
00:15:56.965 {
00:15:56.965 "name": "pt2",
00:15:56.965 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:56.965 "is_configured": true,
00:15:56.965 "data_offset": 256,
00:15:56.965 "data_size": 7936
00:15:56.965 }
00:15:56.965 ]
00:15:56.965 }'
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:56.965 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.534 [2024-12-15 18:46:57.763760] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:57.534 [2024-12-15 18:46:57.763845] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:57.534 [2024-12-15 18:46:57.763923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:57.534 [2024-12-15 18:46:57.763983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:57.534 [2024-12-15 18:46:57.764073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.534 [2024-12-15 18:46:57.839655] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:57.534 [2024-12-15 18:46:57.839708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:57.534 [2024-12-15 18:46:57.839725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:15:57.534 [2024-12-15 18:46:57.839733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:57.534 [2024-12-15 18:46:57.841744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:57.534 [2024-12-15 18:46:57.841783] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:57.534 [2024-12-15 18:46:57.841843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:57.534 [2024-12-15 18:46:57.841873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:57.534 [2024-12-15 18:46:57.841950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:15:57.534 [2024-12-15 18:46:57.841959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:15:57.534 [2024-12-15 18:46:57.842050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:15:57.534 [2024-12-15 18:46:57.842105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:15:57.534 [2024-12-15 18:46:57.842114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:15:57.534 [2024-12-15 18:46:57.842165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:57.534 pt2
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:57.534 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:57.535 "name": "raid_bdev1",
00:15:57.535 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68",
00:15:57.535 "strip_size_kb": 0,
00:15:57.535 "state": "online",
00:15:57.535 "raid_level": "raid1",
00:15:57.535 "superblock": true,
00:15:57.535 "num_base_bdevs": 2,
00:15:57.535 "num_base_bdevs_discovered": 1,
00:15:57.535 "num_base_bdevs_operational": 1,
00:15:57.535 "base_bdevs_list": [
00:15:57.535 {
00:15:57.535 "name": null,
00:15:57.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:57.535 "is_configured": false,
00:15:57.535 "data_offset": 256,
00:15:57.535 "data_size": 7936
00:15:57.535 },
00:15:57.535 {
00:15:57.535 "name": "pt2",
00:15:57.535 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:57.535 "is_configured": true,
00:15:57.535 "data_offset": 256,
00:15:57.535 "data_size": 7936
00:15:57.535 }
00:15:57.535 ]
00:15:57.535 }'
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:57.535 18:46:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:58.103 18:46:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.103 [2024-12-15 18:46:58.298856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.103 [2024-12-15 18:46:58.298934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.103 [2024-12-15 18:46:58.299002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.103 [2024-12-15 18:46:58.299053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.103 [2024-12-15 18:46:58.299085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.103 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.103 [2024-12-15 18:46:58.362732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.103 [2024-12-15 18:46:58.362830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.104 [2024-12-15 18:46:58.362861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:58.104 [2024-12-15 18:46:58.362890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.104 [2024-12-15 18:46:58.364713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.104 [2024-12-15 18:46:58.364793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.104 [2024-12-15 18:46:58.364868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:58.104 [2024-12-15 18:46:58.364919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.104 [2024-12-15 18:46:58.365010] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:58.104 [2024-12-15 18:46:58.365076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.104 [2024-12-15 18:46:58.365094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:58.104 [2024-12-15 18:46:58.365122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.104 [2024-12-15 18:46:58.365181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:15:58.104 [2024-12-15 18:46:58.365193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:58.104 [2024-12-15 18:46:58.365258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:58.104 [2024-12-15 18:46:58.365309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:58.104 [2024-12-15 18:46:58.365317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:58.104 [2024-12-15 18:46:58.365374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.104 pt1 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.104 18:46:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.104 "name": "raid_bdev1", 00:15:58.104 "uuid": "c2a34784-9a0a-4913-a5fb-c65369302f68", 00:15:58.104 "strip_size_kb": 0, 00:15:58.104 "state": "online", 00:15:58.104 "raid_level": "raid1", 00:15:58.104 "superblock": true, 00:15:58.104 "num_base_bdevs": 2, 00:15:58.104 "num_base_bdevs_discovered": 1, 00:15:58.104 "num_base_bdevs_operational": 1, 00:15:58.104 "base_bdevs_list": [ 00:15:58.104 { 00:15:58.104 "name": null, 00:15:58.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.104 "is_configured": false, 00:15:58.104 "data_offset": 256, 00:15:58.104 "data_size": 7936 00:15:58.104 }, 00:15:58.104 { 00:15:58.104 "name": "pt2", 00:15:58.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.104 "is_configured": true, 00:15:58.104 "data_offset": 256, 00:15:58.104 "data_size": 7936 00:15:58.104 } 00:15:58.104 ] 00:15:58.104 }' 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.104 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.363 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.623 [2024-12-15 18:46:58.806198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c2a34784-9a0a-4913-a5fb-c65369302f68 '!=' c2a34784-9a0a-4913-a5fb-c65369302f68 ']' 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100940 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 100940 ']' 00:15:58.623 18:46:58 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 100940 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100940 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100940' 00:15:58.623 killing process with pid 100940 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 100940 00:15:58.623 [2024-12-15 18:46:58.889022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.623 [2024-12-15 18:46:58.889086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.623 [2024-12-15 18:46:58.889123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.623 [2024-12-15 18:46:58.889131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:58.623 18:46:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 100940 00:15:58.623 [2024-12-15 18:46:58.913222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.884 18:46:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:58.884 00:15:58.884 real 0m4.988s 00:15:58.884 user 0m8.102s 00:15:58.884 sys 0m1.125s 
00:15:58.884 18:46:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.884 18:46:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.884 ************************************ 00:15:58.884 END TEST raid_superblock_test_md_interleaved 00:15:58.884 ************************************ 00:15:58.884 18:46:59 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:58.884 18:46:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:58.884 18:46:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.884 18:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.884 ************************************ 00:15:58.884 START TEST raid_rebuild_test_sb_md_interleaved 00:15:58.884 ************************************ 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.884 18:46:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:58.884 
18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=101257 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 101257 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 101257 ']' 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.884 18:46:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.884 [2024-12-15 18:46:59.320347] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:58.884 [2024-12-15 18:46:59.320546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101257 ] 00:15:58.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.884 Zero copy mechanism will not be used. 
00:15:59.144 [2024-12-15 18:46:59.495532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.144 [2024-12-15 18:46:59.522276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.144 [2024-12-15 18:46:59.565233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.144 [2024-12-15 18:46:59.565358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.713 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 BaseBdev1_malloc 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 [2024-12-15 18:47:00.165382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.974 [2024-12-15 18:47:00.165450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.974 
[2024-12-15 18:47:00.165483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.974 [2024-12-15 18:47:00.165499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.974 [2024-12-15 18:47:00.167379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.974 [2024-12-15 18:47:00.167459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.974 BaseBdev1 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 BaseBdev2_malloc 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 [2024-12-15 18:47:00.194143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.974 [2024-12-15 18:47:00.194243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.974 [2024-12-15 18:47:00.194270] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.974 [2024-12-15 18:47:00.194278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.974 [2024-12-15 18:47:00.196077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.974 [2024-12-15 18:47:00.196112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.974 BaseBdev2 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 spare_malloc 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 spare_delay 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 [2024-12-15 18:47:00.247462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.974 [2024-12-15 18:47:00.247617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.974 [2024-12-15 18:47:00.247656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:59.974 [2024-12-15 18:47:00.247668] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.974 [2024-12-15 18:47:00.250249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.974 [2024-12-15 18:47:00.250295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.974 spare 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 [2024-12-15 18:47:00.259455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.974 [2024-12-15 18:47:00.261290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.974 [2024-12-15 18:47:00.261519] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:59.974 [2024-12-15 18:47:00.261541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:59.974 [2024-12-15 18:47:00.261638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:15:59.974 [2024-12-15 18:47:00.261703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:59.974 [2024-12-15 18:47:00.261715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:59.974 [2024-12-15 18:47:00.261786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.974 "name": "raid_bdev1", 00:15:59.974 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:15:59.974 "strip_size_kb": 0, 00:15:59.974 "state": "online", 00:15:59.974 "raid_level": "raid1", 00:15:59.974 "superblock": true, 00:15:59.974 "num_base_bdevs": 2, 00:15:59.974 "num_base_bdevs_discovered": 2, 00:15:59.974 "num_base_bdevs_operational": 2, 00:15:59.974 "base_bdevs_list": [ 00:15:59.974 { 00:15:59.974 "name": "BaseBdev1", 00:15:59.974 "uuid": "97e5110d-21e4-5538-aaf6-d33f4959d066", 00:15:59.974 "is_configured": true, 00:15:59.974 "data_offset": 256, 00:15:59.974 "data_size": 7936 00:15:59.974 }, 00:15:59.974 { 00:15:59.974 "name": "BaseBdev2", 00:15:59.974 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:15:59.974 "is_configured": true, 00:15:59.974 "data_offset": 256, 00:15:59.974 "data_size": 7936 00:15:59.974 } 00:15:59.974 ] 00:15:59.974 }' 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.974 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.544 
18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.544 [2024-12-15 18:47:00.698921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.544 [2024-12-15 18:47:00.778525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.544 "name": "raid_bdev1", 00:16:00.544 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:00.544 "strip_size_kb": 0, 00:16:00.544 "state": "online", 00:16:00.544 "raid_level": "raid1", 00:16:00.544 "superblock": true, 00:16:00.544 "num_base_bdevs": 2, 00:16:00.544 "num_base_bdevs_discovered": 1, 00:16:00.544 "num_base_bdevs_operational": 1, 00:16:00.544 "base_bdevs_list": [ 00:16:00.544 { 00:16:00.544 "name": null, 00:16:00.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.544 "is_configured": false, 00:16:00.544 "data_offset": 0, 00:16:00.544 "data_size": 7936 00:16:00.544 }, 00:16:00.544 { 00:16:00.544 "name": "BaseBdev2", 00:16:00.544 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:00.544 "is_configured": true, 00:16:00.544 "data_offset": 256, 00:16:00.544 "data_size": 7936 00:16:00.544 } 00:16:00.544 ] 00:16:00.544 }' 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.544 18:47:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.113 18:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:01.113 18:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.113 18:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.113 [2024-12-15 18:47:01.269756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.113 [2024-12-15 18:47:01.273472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:01.113 [2024-12-15 18:47:01.275368] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.113 18:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:01.113 18:47:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.051 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.052 "name": "raid_bdev1", 00:16:02.052 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:02.052 "strip_size_kb": 0, 00:16:02.052 "state": "online", 00:16:02.052 "raid_level": "raid1", 00:16:02.052 "superblock": true, 00:16:02.052 "num_base_bdevs": 2, 00:16:02.052 "num_base_bdevs_discovered": 2, 00:16:02.052 "num_base_bdevs_operational": 2, 00:16:02.052 "process": { 00:16:02.052 "type": "rebuild", 00:16:02.052 "target": "spare", 00:16:02.052 "progress": { 00:16:02.052 "blocks": 
2560, 00:16:02.052 "percent": 32 00:16:02.052 } 00:16:02.052 }, 00:16:02.052 "base_bdevs_list": [ 00:16:02.052 { 00:16:02.052 "name": "spare", 00:16:02.052 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:02.052 "is_configured": true, 00:16:02.052 "data_offset": 256, 00:16:02.052 "data_size": 7936 00:16:02.052 }, 00:16:02.052 { 00:16:02.052 "name": "BaseBdev2", 00:16:02.052 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:02.052 "is_configured": true, 00:16:02.052 "data_offset": 256, 00:16:02.052 "data_size": 7936 00:16:02.052 } 00:16:02.052 ] 00:16:02.052 }' 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.052 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.052 [2024-12-15 18:47:02.438193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.052 [2024-12-15 18:47:02.480176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.052 [2024-12-15 18:47:02.480228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.052 [2024-12-15 18:47:02.480244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.052 [2024-12-15 18:47:02.480251] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.311 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.311 "name": "raid_bdev1", 00:16:02.311 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:02.311 "strip_size_kb": 0, 00:16:02.312 "state": "online", 00:16:02.312 "raid_level": "raid1", 00:16:02.312 "superblock": true, 00:16:02.312 "num_base_bdevs": 2, 00:16:02.312 "num_base_bdevs_discovered": 1, 00:16:02.312 "num_base_bdevs_operational": 1, 00:16:02.312 "base_bdevs_list": [ 00:16:02.312 { 00:16:02.312 "name": null, 00:16:02.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.312 "is_configured": false, 00:16:02.312 "data_offset": 0, 00:16:02.312 "data_size": 7936 00:16:02.312 }, 00:16:02.312 { 00:16:02.312 "name": "BaseBdev2", 00:16:02.312 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:02.312 "is_configured": true, 00:16:02.312 "data_offset": 256, 00:16:02.312 "data_size": 7936 00:16:02.312 } 00:16:02.312 ] 00:16:02.312 }' 00:16:02.312 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.312 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.571 18:47:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.571 "name": "raid_bdev1", 00:16:02.571 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:02.571 "strip_size_kb": 0, 00:16:02.571 "state": "online", 00:16:02.571 "raid_level": "raid1", 00:16:02.571 "superblock": true, 00:16:02.571 "num_base_bdevs": 2, 00:16:02.571 "num_base_bdevs_discovered": 1, 00:16:02.571 "num_base_bdevs_operational": 1, 00:16:02.571 "base_bdevs_list": [ 00:16:02.571 { 00:16:02.571 "name": null, 00:16:02.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.571 "is_configured": false, 00:16:02.571 "data_offset": 0, 00:16:02.571 "data_size": 7936 00:16:02.571 }, 00:16:02.571 { 00:16:02.571 "name": "BaseBdev2", 00:16:02.571 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:02.571 "is_configured": true, 00:16:02.571 "data_offset": 256, 00:16:02.571 "data_size": 7936 00:16:02.571 } 00:16:02.571 ] 00:16:02.571 }' 00:16:02.571 18:47:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.829 18:47:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.829 [2024-12-15 18:47:03.091322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.829 [2024-12-15 18:47:03.094342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:02.829 [2024-12-15 18:47:03.096117] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.829 18:47:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.766 
18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.766 "name": "raid_bdev1", 00:16:03.766 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:03.766 "strip_size_kb": 0, 00:16:03.766 "state": "online", 00:16:03.766 "raid_level": "raid1", 00:16:03.766 "superblock": true, 00:16:03.766 "num_base_bdevs": 2, 00:16:03.766 "num_base_bdevs_discovered": 2, 00:16:03.766 "num_base_bdevs_operational": 2, 00:16:03.766 "process": { 00:16:03.766 "type": "rebuild", 00:16:03.766 "target": "spare", 00:16:03.766 "progress": { 00:16:03.766 "blocks": 2560, 00:16:03.766 "percent": 32 00:16:03.766 } 00:16:03.766 }, 00:16:03.766 "base_bdevs_list": [ 00:16:03.766 { 00:16:03.766 "name": "spare", 00:16:03.766 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:03.766 "is_configured": true, 00:16:03.766 "data_offset": 256, 00:16:03.766 "data_size": 7936 00:16:03.766 }, 00:16:03.766 { 00:16:03.766 "name": "BaseBdev2", 00:16:03.766 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:03.766 "is_configured": true, 00:16:03.766 "data_offset": 256, 00:16:03.766 "data_size": 7936 00:16:03.766 } 00:16:03.766 ] 00:16:03.766 }' 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.766 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.026 18:47:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:04.026 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.026 18:47:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.026 "name": "raid_bdev1", 00:16:04.026 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:04.026 "strip_size_kb": 0, 00:16:04.026 "state": "online", 00:16:04.026 "raid_level": "raid1", 00:16:04.026 "superblock": true, 00:16:04.026 "num_base_bdevs": 2, 00:16:04.026 "num_base_bdevs_discovered": 2, 00:16:04.026 "num_base_bdevs_operational": 2, 00:16:04.026 "process": { 00:16:04.026 "type": "rebuild", 00:16:04.026 "target": "spare", 00:16:04.026 "progress": { 00:16:04.026 "blocks": 2816, 00:16:04.026 "percent": 35 00:16:04.026 } 00:16:04.026 }, 00:16:04.026 "base_bdevs_list": [ 00:16:04.026 { 00:16:04.026 "name": "spare", 00:16:04.026 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 256, 00:16:04.026 "data_size": 7936 00:16:04.026 }, 00:16:04.026 { 00:16:04.026 "name": "BaseBdev2", 00:16:04.026 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:04.026 "is_configured": true, 00:16:04.026 "data_offset": 256, 00:16:04.026 "data_size": 7936 00:16:04.026 } 00:16:04.026 ] 00:16:04.026 }' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.026 18:47:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.963 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.222 "name": "raid_bdev1", 00:16:05.222 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:05.222 "strip_size_kb": 0, 00:16:05.222 "state": "online", 00:16:05.222 "raid_level": "raid1", 00:16:05.222 "superblock": true, 00:16:05.222 "num_base_bdevs": 2, 00:16:05.222 "num_base_bdevs_discovered": 2, 00:16:05.222 
"num_base_bdevs_operational": 2, 00:16:05.222 "process": { 00:16:05.222 "type": "rebuild", 00:16:05.222 "target": "spare", 00:16:05.222 "progress": { 00:16:05.222 "blocks": 5632, 00:16:05.222 "percent": 70 00:16:05.222 } 00:16:05.222 }, 00:16:05.222 "base_bdevs_list": [ 00:16:05.222 { 00:16:05.222 "name": "spare", 00:16:05.222 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:05.222 "is_configured": true, 00:16:05.222 "data_offset": 256, 00:16:05.222 "data_size": 7936 00:16:05.222 }, 00:16:05.222 { 00:16:05.222 "name": "BaseBdev2", 00:16:05.222 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:05.222 "is_configured": true, 00:16:05.222 "data_offset": 256, 00:16:05.222 "data_size": 7936 00:16:05.222 } 00:16:05.222 ] 00:16:05.222 }' 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.222 18:47:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.790 [2024-12-15 18:47:06.206898] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.790 [2024-12-15 18:47:06.206966] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.790 [2024-12-15 18:47:06.207075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.358 "name": "raid_bdev1", 00:16:06.358 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:06.358 "strip_size_kb": 0, 00:16:06.358 "state": "online", 00:16:06.358 "raid_level": "raid1", 00:16:06.358 "superblock": true, 00:16:06.358 "num_base_bdevs": 2, 00:16:06.358 "num_base_bdevs_discovered": 2, 00:16:06.358 "num_base_bdevs_operational": 2, 00:16:06.358 "base_bdevs_list": [ 00:16:06.358 { 00:16:06.358 "name": "spare", 00:16:06.358 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:06.358 "is_configured": true, 00:16:06.358 "data_offset": 256, 00:16:06.358 "data_size": 7936 00:16:06.358 }, 00:16:06.358 { 00:16:06.358 "name": "BaseBdev2", 00:16:06.358 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:06.358 
"is_configured": true, 00:16:06.358 "data_offset": 256, 00:16:06.358 "data_size": 7936 00:16:06.358 } 00:16:06.358 ] 00:16:06.358 }' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.358 "name": "raid_bdev1", 00:16:06.358 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:06.358 "strip_size_kb": 0, 00:16:06.358 "state": "online", 00:16:06.358 "raid_level": "raid1", 00:16:06.358 "superblock": true, 00:16:06.358 "num_base_bdevs": 2, 00:16:06.358 "num_base_bdevs_discovered": 2, 00:16:06.358 "num_base_bdevs_operational": 2, 00:16:06.358 "base_bdevs_list": [ 00:16:06.358 { 00:16:06.358 "name": "spare", 00:16:06.358 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:06.358 "is_configured": true, 00:16:06.358 "data_offset": 256, 00:16:06.358 "data_size": 7936 00:16:06.358 }, 00:16:06.358 { 00:16:06.358 "name": "BaseBdev2", 00:16:06.358 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:06.358 "is_configured": true, 00:16:06.358 "data_offset": 256, 00:16:06.358 "data_size": 7936 00:16:06.358 } 00:16:06.358 ] 00:16:06.358 }' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.358 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.359 "name": "raid_bdev1", 00:16:06.359 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:06.359 "strip_size_kb": 0, 00:16:06.359 "state": "online", 00:16:06.359 "raid_level": "raid1", 00:16:06.359 "superblock": true, 00:16:06.359 "num_base_bdevs": 2, 00:16:06.359 "num_base_bdevs_discovered": 2, 00:16:06.359 "num_base_bdevs_operational": 2, 00:16:06.359 "base_bdevs_list": [ 00:16:06.359 { 00:16:06.359 "name": "spare", 00:16:06.359 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:06.359 
"is_configured": true, 00:16:06.359 "data_offset": 256, 00:16:06.359 "data_size": 7936 00:16:06.359 }, 00:16:06.359 { 00:16:06.359 "name": "BaseBdev2", 00:16:06.359 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:06.359 "is_configured": true, 00:16:06.359 "data_offset": 256, 00:16:06.359 "data_size": 7936 00:16:06.359 } 00:16:06.359 ] 00:16:06.359 }' 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.359 18:47:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 [2024-12-15 18:47:07.137190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.927 [2024-12-15 18:47:07.137283] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.927 [2024-12-15 18:47:07.137375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.927 [2024-12-15 18:47:07.137459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.927 [2024-12-15 18:47:07.137515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.927 
18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 [2024-12-15 18:47:07.193087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.927 [2024-12-15 18:47:07.193144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.927 [2024-12-15 18:47:07.193163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:06.927 [2024-12-15 18:47:07.193174] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.927 [2024-12-15 18:47:07.195086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.927 [2024-12-15 18:47:07.195136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.927 [2024-12-15 18:47:07.195183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.927 [2024-12-15 18:47:07.195228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.927 [2024-12-15 18:47:07.195322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.927 spare 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 [2024-12-15 18:47:07.295203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:06.927 [2024-12-15 18:47:07.295227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:06.927 [2024-12-15 18:47:07.295324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.927 [2024-12-15 18:47:07.295404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:06.927 [2024-12-15 18:47:07.295417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:06.927 [2024-12-15 18:47:07.295481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.927 18:47:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.927 18:47:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.927 "name": "raid_bdev1", 00:16:06.927 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:06.927 "strip_size_kb": 0, 00:16:06.927 "state": "online", 00:16:06.927 "raid_level": "raid1", 00:16:06.927 "superblock": true, 00:16:06.927 "num_base_bdevs": 2, 00:16:06.927 "num_base_bdevs_discovered": 2, 00:16:06.927 "num_base_bdevs_operational": 2, 00:16:06.927 "base_bdevs_list": [ 00:16:06.927 { 00:16:06.927 "name": "spare", 00:16:06.927 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:06.927 "is_configured": true, 00:16:06.927 "data_offset": 256, 00:16:06.927 "data_size": 7936 00:16:06.927 }, 00:16:06.927 { 00:16:06.927 "name": "BaseBdev2", 00:16:06.927 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:06.927 "is_configured": true, 00:16:06.927 "data_offset": 256, 00:16:06.927 "data_size": 7936 00:16:06.927 } 00:16:06.927 ] 00:16:06.927 }' 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.927 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.496 18:47:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.496 "name": "raid_bdev1", 00:16:07.496 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:07.496 "strip_size_kb": 0, 00:16:07.496 "state": "online", 00:16:07.496 "raid_level": "raid1", 00:16:07.496 "superblock": true, 00:16:07.496 "num_base_bdevs": 2, 00:16:07.496 "num_base_bdevs_discovered": 2, 00:16:07.496 "num_base_bdevs_operational": 2, 00:16:07.496 "base_bdevs_list": [ 00:16:07.496 { 00:16:07.496 "name": "spare", 00:16:07.496 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:07.496 "is_configured": true, 00:16:07.496 "data_offset": 256, 00:16:07.496 "data_size": 7936 00:16:07.496 }, 00:16:07.496 { 00:16:07.496 "name": "BaseBdev2", 00:16:07.496 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:07.496 "is_configured": true, 00:16:07.496 "data_offset": 256, 00:16:07.496 "data_size": 7936 00:16:07.496 } 00:16:07.496 ] 00:16:07.496 }' 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.496 18:47:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.496 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 [2024-12-15 18:47:07.944415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.756 18:47:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.756 "name": "raid_bdev1", 00:16:07.756 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:07.756 "strip_size_kb": 0, 00:16:07.756 "state": "online", 00:16:07.756 "raid_level": "raid1", 00:16:07.756 "superblock": true, 00:16:07.756 "num_base_bdevs": 2, 00:16:07.756 "num_base_bdevs_discovered": 1, 00:16:07.756 "num_base_bdevs_operational": 1, 00:16:07.756 "base_bdevs_list": [ 00:16:07.756 { 00:16:07.756 "name": null, 00:16:07.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.756 "is_configured": false, 00:16:07.756 "data_offset": 0, 00:16:07.756 "data_size": 7936 00:16:07.756 }, 00:16:07.756 { 00:16:07.756 "name": "BaseBdev2", 00:16:07.756 
"uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:07.756 "is_configured": true, 00:16:07.756 "data_offset": 256, 00:16:07.756 "data_size": 7936 00:16:07.756 } 00:16:07.756 ] 00:16:07.756 }' 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.756 18:47:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 18:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:08.015 18:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.015 18:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 [2024-12-15 18:47:08.303764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.016 [2024-12-15 18:47:08.303979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:08.016 [2024-12-15 18:47:08.304039] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:08.016 [2024-12-15 18:47:08.304093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:08.016 [2024-12-15 18:47:08.307639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:08.016 [2024-12-15 18:47:08.309463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.016 18:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.016 18:47:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:08.954 "name": "raid_bdev1", 00:16:08.954 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:08.954 "strip_size_kb": 0, 00:16:08.954 "state": "online", 00:16:08.954 "raid_level": "raid1", 00:16:08.954 "superblock": true, 00:16:08.954 "num_base_bdevs": 2, 00:16:08.954 "num_base_bdevs_discovered": 2, 00:16:08.954 "num_base_bdevs_operational": 2, 00:16:08.954 "process": { 00:16:08.954 "type": "rebuild", 00:16:08.954 "target": "spare", 00:16:08.954 "progress": { 00:16:08.954 "blocks": 2560, 00:16:08.954 "percent": 32 00:16:08.954 } 00:16:08.954 }, 00:16:08.954 "base_bdevs_list": [ 00:16:08.954 { 00:16:08.954 "name": "spare", 00:16:08.954 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:08.954 "is_configured": true, 00:16:08.954 "data_offset": 256, 00:16:08.954 "data_size": 7936 00:16:08.954 }, 00:16:08.954 { 00:16:08.954 "name": "BaseBdev2", 00:16:08.954 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:08.954 "is_configured": true, 00:16:08.954 "data_offset": 256, 00:16:08.954 "data_size": 7936 00:16:08.954 } 00:16:08.954 ] 00:16:08.954 }' 00:16:08.954 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.213 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.213 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.214 [2024-12-15 18:47:09.476971] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.214 [2024-12-15 18:47:09.513470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.214 [2024-12-15 18:47:09.513572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.214 [2024-12-15 18:47:09.513624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.214 [2024-12-15 18:47:09.513645] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.214 18:47:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.214 "name": "raid_bdev1", 00:16:09.214 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:09.214 "strip_size_kb": 0, 00:16:09.214 "state": "online", 00:16:09.214 "raid_level": "raid1", 00:16:09.214 "superblock": true, 00:16:09.214 "num_base_bdevs": 2, 00:16:09.214 "num_base_bdevs_discovered": 1, 00:16:09.214 "num_base_bdevs_operational": 1, 00:16:09.214 "base_bdevs_list": [ 00:16:09.214 { 00:16:09.214 "name": null, 00:16:09.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.214 "is_configured": false, 00:16:09.214 "data_offset": 0, 00:16:09.214 "data_size": 7936 00:16:09.214 }, 00:16:09.214 { 00:16:09.214 "name": "BaseBdev2", 00:16:09.214 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:09.214 "is_configured": true, 00:16:09.214 "data_offset": 256, 00:16:09.214 "data_size": 7936 00:16:09.214 } 00:16:09.214 ] 00:16:09.214 }' 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.214 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.783 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.783 18:47:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.783 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:09.783 [2024-12-15 18:47:09.992928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.783 [2024-12-15 18:47:09.992990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.783 [2024-12-15 18:47:09.993019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.783 [2024-12-15 18:47:09.993029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.783 [2024-12-15 18:47:09.993198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.783 [2024-12-15 18:47:09.993211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.783 [2024-12-15 18:47:09.993263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.783 [2024-12-15 18:47:09.993274] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.783 [2024-12-15 18:47:09.993285] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.783 [2024-12-15 18:47:09.993308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.783 [2024-12-15 18:47:09.996120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:09.783 [2024-12-15 18:47:09.998019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.783 spare 00:16:09.783 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.783 18:47:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:10.721 "name": "raid_bdev1", 00:16:10.721 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:10.721 "strip_size_kb": 0, 00:16:10.721 "state": "online", 00:16:10.721 "raid_level": "raid1", 00:16:10.721 "superblock": true, 00:16:10.721 "num_base_bdevs": 2, 00:16:10.721 "num_base_bdevs_discovered": 2, 00:16:10.721 "num_base_bdevs_operational": 2, 00:16:10.721 "process": { 00:16:10.721 "type": "rebuild", 00:16:10.721 "target": "spare", 00:16:10.721 "progress": { 00:16:10.721 "blocks": 2560, 00:16:10.721 "percent": 32 00:16:10.721 } 00:16:10.721 }, 00:16:10.721 "base_bdevs_list": [ 00:16:10.721 { 00:16:10.721 "name": "spare", 00:16:10.721 "uuid": "b2a69f0b-9dc7-5b37-aa85-cae885a83aeb", 00:16:10.721 "is_configured": true, 00:16:10.721 "data_offset": 256, 00:16:10.721 "data_size": 7936 00:16:10.721 }, 00:16:10.721 { 00:16:10.721 "name": "BaseBdev2", 00:16:10.721 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:10.721 "is_configured": true, 00:16:10.721 "data_offset": 256, 00:16:10.721 "data_size": 7936 00:16:10.721 } 00:16:10.721 ] 00:16:10.721 }' 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.721 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.722 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.981 [2024-12-15 
18:47:11.164954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.981 [2024-12-15 18:47:11.201955] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.982 [2024-12-15 18:47:11.202061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.982 [2024-12-15 18:47:11.202094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.982 [2024-12-15 18:47:11.202116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.982 18:47:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.982 "name": "raid_bdev1", 00:16:10.982 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:10.982 "strip_size_kb": 0, 00:16:10.982 "state": "online", 00:16:10.982 "raid_level": "raid1", 00:16:10.982 "superblock": true, 00:16:10.982 "num_base_bdevs": 2, 00:16:10.982 "num_base_bdevs_discovered": 1, 00:16:10.982 "num_base_bdevs_operational": 1, 00:16:10.982 "base_bdevs_list": [ 00:16:10.982 { 00:16:10.982 "name": null, 00:16:10.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.982 "is_configured": false, 00:16:10.982 "data_offset": 0, 00:16:10.982 "data_size": 7936 00:16:10.982 }, 00:16:10.982 { 00:16:10.982 "name": "BaseBdev2", 00:16:10.982 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:10.982 "is_configured": true, 00:16:10.982 "data_offset": 256, 00:16:10.982 "data_size": 7936 00:16:10.982 } 00:16:10.982 ] 00:16:10.982 }' 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.982 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.241 18:47:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.241 "name": "raid_bdev1", 00:16:11.241 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:11.241 "strip_size_kb": 0, 00:16:11.241 "state": "online", 00:16:11.241 "raid_level": "raid1", 00:16:11.241 "superblock": true, 00:16:11.241 "num_base_bdevs": 2, 00:16:11.241 "num_base_bdevs_discovered": 1, 00:16:11.241 "num_base_bdevs_operational": 1, 00:16:11.241 "base_bdevs_list": [ 00:16:11.241 { 00:16:11.241 "name": null, 00:16:11.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.241 "is_configured": false, 00:16:11.241 "data_offset": 0, 00:16:11.241 "data_size": 7936 00:16:11.241 }, 00:16:11.241 { 00:16:11.241 "name": "BaseBdev2", 00:16:11.241 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:11.241 "is_configured": true, 00:16:11.241 "data_offset": 256, 
00:16:11.241 "data_size": 7936 00:16:11.241 } 00:16:11.241 ] 00:16:11.241 }' 00:16:11.241 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.501 [2024-12-15 18:47:11.740901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.501 [2024-12-15 18:47:11.740960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.501 [2024-12-15 18:47:11.740978] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:11.501 [2024-12-15 18:47:11.740988] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.501 [2024-12-15 18:47:11.741134] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.501 [2024-12-15 18:47:11.741150] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.501 [2024-12-15 18:47:11.741192] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.501 [2024-12-15 18:47:11.741205] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.501 [2024-12-15 18:47:11.741219] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.501 [2024-12-15 18:47:11.741233] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.501 BaseBdev1 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.501 18:47:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.439 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.439 18:47:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.440 "name": "raid_bdev1", 00:16:12.440 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:12.440 "strip_size_kb": 0, 00:16:12.440 "state": "online", 00:16:12.440 "raid_level": "raid1", 00:16:12.440 "superblock": true, 00:16:12.440 "num_base_bdevs": 2, 00:16:12.440 "num_base_bdevs_discovered": 1, 00:16:12.440 "num_base_bdevs_operational": 1, 00:16:12.440 "base_bdevs_list": [ 00:16:12.440 { 00:16:12.440 "name": null, 00:16:12.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.440 "is_configured": false, 00:16:12.440 "data_offset": 0, 00:16:12.440 "data_size": 7936 00:16:12.440 }, 00:16:12.440 { 00:16:12.440 "name": "BaseBdev2", 00:16:12.440 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:12.440 "is_configured": true, 00:16:12.440 "data_offset": 256, 00:16:12.440 "data_size": 7936 00:16:12.440 } 00:16:12.440 ] 00:16:12.440 }' 00:16:12.440 18:47:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.440 18:47:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.008 "name": "raid_bdev1", 00:16:13.008 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:13.008 "strip_size_kb": 0, 00:16:13.008 "state": "online", 00:16:13.008 "raid_level": "raid1", 00:16:13.008 "superblock": true, 00:16:13.008 "num_base_bdevs": 2, 00:16:13.008 "num_base_bdevs_discovered": 1, 00:16:13.008 "num_base_bdevs_operational": 1, 00:16:13.008 "base_bdevs_list": [ 00:16:13.008 { 00:16:13.008 "name": 
null, 00:16:13.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.008 "is_configured": false, 00:16:13.008 "data_offset": 0, 00:16:13.008 "data_size": 7936 00:16:13.008 }, 00:16:13.008 { 00:16:13.008 "name": "BaseBdev2", 00:16:13.008 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:13.008 "is_configured": true, 00:16:13.008 "data_offset": 256, 00:16:13.008 "data_size": 7936 00:16:13.008 } 00:16:13.008 ] 00:16:13.008 }' 00:16:13.008 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.009 [2024-12-15 18:47:13.350128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.009 [2024-12-15 18:47:13.350291] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.009 [2024-12-15 18:47:13.350307] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.009 request: 00:16:13.009 { 00:16:13.009 "base_bdev": "BaseBdev1", 00:16:13.009 "raid_bdev": "raid_bdev1", 00:16:13.009 "method": "bdev_raid_add_base_bdev", 00:16:13.009 "req_id": 1 00:16:13.009 } 00:16:13.009 Got JSON-RPC error response 00:16:13.009 response: 00:16:13.009 { 00:16:13.009 "code": -22, 00:16:13.009 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.009 } 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.009 18:47:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:13.995 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:13.995 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.995 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.995 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.995 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.996 "name": "raid_bdev1", 00:16:13.996 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:13.996 "strip_size_kb": 0, 
00:16:13.996 "state": "online", 00:16:13.996 "raid_level": "raid1", 00:16:13.996 "superblock": true, 00:16:13.996 "num_base_bdevs": 2, 00:16:13.996 "num_base_bdevs_discovered": 1, 00:16:13.996 "num_base_bdevs_operational": 1, 00:16:13.996 "base_bdevs_list": [ 00:16:13.996 { 00:16:13.996 "name": null, 00:16:13.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.996 "is_configured": false, 00:16:13.996 "data_offset": 0, 00:16:13.996 "data_size": 7936 00:16:13.996 }, 00:16:13.996 { 00:16:13.996 "name": "BaseBdev2", 00:16:13.996 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:13.996 "is_configured": true, 00:16:13.996 "data_offset": 256, 00:16:13.996 "data_size": 7936 00:16:13.996 } 00:16:13.996 ] 00:16:13.996 }' 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.996 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 18:47:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.565 "name": "raid_bdev1", 00:16:14.565 "uuid": "0708fe05-65cc-430a-99b4-82e6170e1a6f", 00:16:14.565 "strip_size_kb": 0, 00:16:14.565 "state": "online", 00:16:14.565 "raid_level": "raid1", 00:16:14.565 "superblock": true, 00:16:14.565 "num_base_bdevs": 2, 00:16:14.565 "num_base_bdevs_discovered": 1, 00:16:14.565 "num_base_bdevs_operational": 1, 00:16:14.565 "base_bdevs_list": [ 00:16:14.565 { 00:16:14.565 "name": null, 00:16:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.565 "is_configured": false, 00:16:14.565 "data_offset": 0, 00:16:14.565 "data_size": 7936 00:16:14.565 }, 00:16:14.565 { 00:16:14.565 "name": "BaseBdev2", 00:16:14.565 "uuid": "b486ecc0-8155-5504-ae4c-3d6fda468b26", 00:16:14.565 "is_configured": true, 00:16:14.565 "data_offset": 256, 00:16:14.565 "data_size": 7936 00:16:14.565 } 00:16:14.565 ] 00:16:14.565 }' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 101257 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 101257 ']' 00:16:14.565 18:47:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 101257 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101257 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101257' 00:16:14.565 killing process with pid 101257 00:16:14.565 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 101257 00:16:14.565 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.565 00:16:14.565 Latency(us) 00:16:14.565 [2024-12-15T18:47:15.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.565 [2024-12-15T18:47:15.006Z] =================================================================================================================== 00:16:14.565 [2024-12-15T18:47:15.006Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.565 [2024-12-15 18:47:14.977618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.566 [2024-12-15 18:47:14.977713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.566 [2024-12-15 18:47:14.977755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.566 [2024-12-15 18:47:14.977764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:14.566 18:47:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 101257 00:16:14.827 [2024-12-15 18:47:15.012112] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.827 18:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.827 00:16:14.827 real 0m15.997s 00:16:14.827 user 0m21.355s 00:16:14.827 sys 0m1.631s 00:16:14.827 ************************************ 00:16:14.827 END TEST raid_rebuild_test_sb_md_interleaved 00:16:14.827 ************************************ 00:16:14.827 18:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.827 18:47:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.086 18:47:15 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:15.086 18:47:15 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:15.086 18:47:15 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 101257 ']' 00:16:15.086 18:47:15 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 101257 00:16:15.086 18:47:15 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:15.086 ************************************ 00:16:15.086 END TEST bdev_raid 00:16:15.086 ************************************ 00:16:15.086 00:16:15.086 real 10m2.217s 00:16:15.086 user 14m10.835s 00:16:15.086 sys 1m52.927s 00:16:15.086 18:47:15 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.086 18:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.086 18:47:15 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:15.086 18:47:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:15.086 18:47:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.086 18:47:15 -- common/autotest_common.sh@10 -- # set +x 00:16:15.086 
************************************ 00:16:15.086 START TEST spdkcli_raid 00:16:15.086 ************************************ 00:16:15.086 18:47:15 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:15.087 * Looking for test storage... 00:16:15.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:15.087 18:47:15 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.087 18:47:15 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.087 18:47:15 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.346 18:47:15 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:15.346 18:47:15 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.347 18:47:15 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.347 --rc genhtml_branch_coverage=1 00:16:15.347 --rc genhtml_function_coverage=1 00:16:15.347 --rc genhtml_legend=1 00:16:15.347 --rc geninfo_all_blocks=1 00:16:15.347 --rc geninfo_unexecuted_blocks=1 00:16:15.347 00:16:15.347 ' 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.347 --rc genhtml_branch_coverage=1 00:16:15.347 --rc genhtml_function_coverage=1 00:16:15.347 --rc genhtml_legend=1 00:16:15.347 --rc geninfo_all_blocks=1 00:16:15.347 --rc geninfo_unexecuted_blocks=1 00:16:15.347 00:16:15.347 ' 00:16:15.347 
18:47:15 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.347 --rc genhtml_branch_coverage=1 00:16:15.347 --rc genhtml_function_coverage=1 00:16:15.347 --rc genhtml_legend=1 00:16:15.347 --rc geninfo_all_blocks=1 00:16:15.347 --rc geninfo_unexecuted_blocks=1 00:16:15.347 00:16:15.347 ' 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.347 --rc genhtml_branch_coverage=1 00:16:15.347 --rc genhtml_function_coverage=1 00:16:15.347 --rc genhtml_legend=1 00:16:15.347 --rc geninfo_all_blocks=1 00:16:15.347 --rc geninfo_unexecuted_blocks=1 00:16:15.347 00:16:15.347 ' 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:15.347 18:47:15 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101920 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:15.347 18:47:15 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101920 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 101920 ']' 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.347 18:47:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.347 [2024-12-15 18:47:15.756284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
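The `waitforlisten 101920` call traced here (`local rpc_addr=/var/tmp/spdk.sock`, `local max_retries=100`, then the "Waiting for process to start up and listen on UNIX domain socket" message) reduces to a poll loop: fail if the target dies, succeed once its RPC socket is reachable. A simplified sketch — the real helper issues an RPC over the socket, so the bare `-S` file test here is a stand-in:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten pattern traced above: poll until
# the target process exits (failure) or its UNIX-domain RPC socket appears
# (success). Checking for the socket file instead of issuing a real RPC is
# an assumption/shortcut; retry count and sleep interval are illustrative.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || {
            echo "process $pid exited before listening" >&2
            return 1
        }
        [[ -S $rpc_addr ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```

The `'[' -z 101920 ']'` and `max_retries=100` lines in the trace correspond to the argument checks and retry bound around exactly this kind of loop.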
00:16:15.347 [2024-12-15 18:47:15.756517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101920 ] 00:16:15.607 [2024-12-15 18:47:15.935189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.607 [2024-12-15 18:47:15.964300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.607 [2024-12-15 18:47:15.964380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.176 18:47:16 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.176 18:47:16 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:16:16.176 18:47:16 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:16.176 18:47:16 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:16.176 18:47:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.435 18:47:16 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:16.435 18:47:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:16.435 18:47:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.435 18:47:16 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:16.435 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:16.435 ' 00:16:17.814 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:17.815 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:17.815 18:47:18 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:17.815 18:47:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:17.815 18:47:18 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.073 18:47:18 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:18.073 18:47:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.073 18:47:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:18.073 18:47:18 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:18.073 ' 00:16:19.012 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:19.012 18:47:19 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:19.012 18:47:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.012 18:47:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 18:47:19 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:19.270 18:47:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.270 18:47:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.270 18:47:19 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:19.270 18:47:19 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:19.839 18:47:19 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:19.839 18:47:20 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:19.839 18:47:20 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:19.839 18:47:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.839 18:47:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.839 18:47:20 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:19.839 18:47:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.839 18:47:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.839 18:47:20 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:19.839 ' 00:16:20.777 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:20.777 18:47:21 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:20.777 18:47:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.777 18:47:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 18:47:21 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:21.036 18:47:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:21.036 18:47:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 18:47:21 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:21.036 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:21.036 ' 00:16:22.416 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:22.416 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:22.416 18:47:22 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.416 18:47:22 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101920 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101920 ']' 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101920 00:16:22.416 18:47:22 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101920 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101920' 00:16:22.416 killing process with pid 101920 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 101920 00:16:22.416 18:47:22 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 101920 00:16:22.985 18:47:23 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:22.985 18:47:23 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101920 ']' 00:16:22.985 18:47:23 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101920 00:16:22.986 18:47:23 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 101920 ']' 00:16:22.986 18:47:23 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 101920 00:16:22.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (101920) - No such process 00:16:22.986 18:47:23 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 101920 is not found' 00:16:22.986 Process with pid 101920 is not found 00:16:22.986 18:47:23 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:22.986 18:47:23 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:22.986 18:47:23 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:22.986 18:47:23 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:22.986 00:16:22.986 real 0m7.769s 00:16:22.986 user 0m16.350s 
00:16:22.986 sys 0m1.134s 00:16:22.986 18:47:23 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.986 18:47:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 ************************************ 00:16:22.986 END TEST spdkcli_raid 00:16:22.986 ************************************ 00:16:22.986 18:47:23 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.986 18:47:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.986 18:47:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.986 18:47:23 -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 ************************************ 00:16:22.986 START TEST blockdev_raid5f 00:16:22.986 ************************************ 00:16:22.986 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:22.986 * Looking for test storage... 00:16:22.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:22.986 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:22.986 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:16:22.986 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.246 18:47:23 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.246 18:47:23 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:16:23.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.246 --rc genhtml_branch_coverage=1 00:16:23.246 --rc genhtml_function_coverage=1 00:16:23.246 --rc genhtml_legend=1 00:16:23.246 --rc geninfo_all_blocks=1 00:16:23.246 --rc geninfo_unexecuted_blocks=1 00:16:23.246 00:16:23.246 ' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.246 --rc genhtml_branch_coverage=1 00:16:23.246 --rc genhtml_function_coverage=1 00:16:23.246 --rc genhtml_legend=1 00:16:23.246 --rc geninfo_all_blocks=1 00:16:23.246 --rc geninfo_unexecuted_blocks=1 00:16:23.246 00:16:23.246 ' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.246 --rc genhtml_branch_coverage=1 00:16:23.246 --rc genhtml_function_coverage=1 00:16:23.246 --rc genhtml_legend=1 00:16:23.246 --rc geninfo_all_blocks=1 00:16:23.246 --rc geninfo_unexecuted_blocks=1 00:16:23.246 00:16:23.246 ' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.246 --rc genhtml_branch_coverage=1 00:16:23.246 --rc genhtml_function_coverage=1 00:16:23.246 --rc genhtml_legend=1 00:16:23.246 --rc geninfo_all_blocks=1 00:16:23.246 --rc geninfo_unexecuted_blocks=1 00:16:23.246 00:16:23.246 ' 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=102178 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:23.246 18:47:23 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 102178 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 102178 ']' 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.246 18:47:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.246 [2024-12-15 18:47:23.574042] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:23.246 [2024-12-15 18:47:23.574284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102178 ] 00:16:23.506 [2024-12-15 18:47:23.751294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.506 [2024-12-15 18:47:23.777067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:24.076 18:47:24 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 Malloc0 00:16:24.076 Malloc1 00:16:24.076 Malloc2 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.076 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.076 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "79ea60c9-a8d9-427b-8fb2-29de12edbb2c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "79ea60c9-a8d9-427b-8fb2-29de12edbb2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "79ea60c9-a8d9-427b-8fb2-29de12edbb2c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8caee553-dc03-4433-bd64-bfd829c11d0d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "809fe2b2-d549-420d-a5ad-4da4bbc56848",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ed15b6f2-db7c-4487-a5ae-e3399842fb4d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:24.336 18:47:24 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 102178 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 102178 ']' 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 102178 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102178 00:16:24.336 killing process with pid 102178 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102178' 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 102178 00:16:24.336 18:47:24 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 102178 00:16:24.905 18:47:25 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:24.905 18:47:25 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:24.905 18:47:25 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:24.905 18:47:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.905 18:47:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:24.905 ************************************ 00:16:24.905 START TEST bdev_hello_world 00:16:24.905 ************************************ 00:16:24.905 18:47:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:24.905 [2024-12-15 18:47:25.145234] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:24.905 [2024-12-15 18:47:25.145377] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102217 ] 00:16:24.905 [2024-12-15 18:47:25.327568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.165 [2024-12-15 18:47:25.356325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.165 [2024-12-15 18:47:25.534355] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:25.165 [2024-12-15 18:47:25.534408] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:25.165 [2024-12-15 18:47:25.534423] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:25.165 [2024-12-15 18:47:25.534720] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:25.165 [2024-12-15 18:47:25.534863] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:25.165 [2024-12-15 18:47:25.534879] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:25.165 [2024-12-15 18:47:25.534924] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:16:25.165 00:16:25.165 [2024-12-15 18:47:25.534946] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:25.425 00:16:25.425 real 0m0.693s 00:16:25.425 user 0m0.373s 00:16:25.425 sys 0m0.214s 00:16:25.425 ************************************ 00:16:25.425 END TEST bdev_hello_world 00:16:25.425 ************************************ 00:16:25.425 18:47:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.425 18:47:25 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:25.425 18:47:25 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:25.425 18:47:25 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.425 18:47:25 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.425 18:47:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:25.425 ************************************ 00:16:25.425 START TEST bdev_bounds 00:16:25.425 ************************************ 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=102243 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 102243' 00:16:25.425 Process bdevio pid: 102243 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 102243 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 102243 ']' 00:16:25.425 
18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.425 18:47:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:25.694 [2024-12-15 18:47:25.932307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:25.694 [2024-12-15 18:47:25.932452] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102243 ] 00:16:25.694 [2024-12-15 18:47:26.109514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.954 [2024-12-15 18:47:26.139067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.954 [2024-12-15 18:47:26.139176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.954 [2024-12-15 18:47:26.139279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.522 18:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.522 18:47:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:26.522 18:47:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:26.522 I/O targets: 00:16:26.522 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:26.522 
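The `killprocess` helper traced repeatedly in this section (pids 101920, 102178, and 102243 below) follows one pattern: confirm the pid is alive with `kill -0`, look up its command name via `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then SIGTERM and wait. A hedged reconstruction — the guard and messages mirror the trace, while the bare `wait` (which only works for children of the calling shell) is a simplification:

```shell
#!/usr/bin/env bash
# Reconstruction of the killprocess helper traced throughout this section:
# verify the pid exists, never SIGTERM a sudo wrapper directly, then kill
# and reap. Error/progress messages are copied from the trace; relying on
# plain `wait` (child processes only) is an assumption.
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || {
        echo "Process with pid $pid is not found"
        return 1
    }
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name == sudo ]] && return 1   # signal the child, not sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
    return 0
}
```

The "No such process" / "Process with pid 101920 is not found" lines earlier in the log come from the second `killprocess` in the EXIT-trap `cleanup`, which runs after the first already reaped the target — exactly the first guard above firing.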
00:16:26.522 00:16:26.522 CUnit - A unit testing framework for C - Version 2.1-3 00:16:26.522 http://cunit.sourceforge.net/ 00:16:26.522 00:16:26.522 00:16:26.522 Suite: bdevio tests on: raid5f 00:16:26.522 Test: blockdev write read block ...passed 00:16:26.522 Test: blockdev write zeroes read block ...passed 00:16:26.522 Test: blockdev write zeroes read no split ...passed 00:16:26.522 Test: blockdev write zeroes read split ...passed 00:16:26.782 Test: blockdev write zeroes read split partial ...passed 00:16:26.782 Test: blockdev reset ...passed 00:16:26.782 Test: blockdev write read 8 blocks ...passed 00:16:26.782 Test: blockdev write read size > 128k ...passed 00:16:26.782 Test: blockdev write read invalid size ...passed 00:16:26.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:26.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:26.782 Test: blockdev write read max offset ...passed 00:16:26.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:26.782 Test: blockdev writev readv 8 blocks ...passed 00:16:26.782 Test: blockdev writev readv 30 x 1block ...passed 00:16:26.782 Test: blockdev writev readv block ...passed 00:16:26.782 Test: blockdev writev readv size > 128k ...passed 00:16:26.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:26.782 Test: blockdev comparev and writev ...passed 00:16:26.782 Test: blockdev nvme passthru rw ...passed 00:16:26.782 Test: blockdev nvme passthru vendor specific ...passed 00:16:26.782 Test: blockdev nvme admin passthru ...passed 00:16:26.782 Test: blockdev copy ...passed 00:16:26.782 00:16:26.782 Run Summary: Type Total Ran Passed Failed Inactive 00:16:26.782 suites 1 1 n/a 0 0 00:16:26.782 tests 23 23 23 0 0 00:16:26.782 asserts 130 130 130 0 n/a 00:16:26.782 00:16:26.782 Elapsed time = 0.342 seconds 00:16:26.782 0 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 102243 
00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 102243 ']' 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 102243 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102243 00:16:26.782 killing process with pid 102243 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102243' 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 102243 00:16:26.782 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 102243 00:16:27.042 ************************************ 00:16:27.042 END TEST bdev_bounds 00:16:27.042 ************************************ 00:16:27.042 18:47:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:27.042 00:16:27.042 real 0m1.479s 00:16:27.042 user 0m3.539s 00:16:27.042 sys 0m0.378s 00:16:27.042 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.042 18:47:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 18:47:27 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:27.042 18:47:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:27.042 18:47:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:16:27.042 18:47:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:27.042 ************************************ 00:16:27.042 START TEST bdev_nbd 00:16:27.042 ************************************ 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:27.042 18:47:27 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:27.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=102291 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 102291 /var/tmp/spdk-nbd.sock 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 102291 ']' 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.042 18:47:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:27.302 [2024-12-15 18:47:27.494197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:27.302 [2024-12-15 18:47:27.494452] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.302 [2024-12-15 18:47:27.671487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.302 [2024-12-15 18:47:27.698277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:27.871 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.130 1+0 records in 00:16:28.130 1+0 records out 00:16:28.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594416 s, 6.9 MB/s 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:28.130 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:28.390 { 00:16:28.390 "nbd_device": "/dev/nbd0", 00:16:28.390 "bdev_name": "raid5f" 00:16:28.390 } 00:16:28.390 ]' 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:28.390 { 00:16:28.390 "nbd_device": "/dev/nbd0", 00:16:28.390 "bdev_name": "raid5f" 00:16:28.390 } 00:16:28.390 ]' 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.390 18:47:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.649 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.909 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:29.168 /dev/nbd0 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:29.168 18:47:29 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:29.168 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:29.169 1+0 records in 00:16:29.169 1+0 records out 00:16:29.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399304 s, 10.3 MB/s 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.169 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:29.428 { 00:16:29.428 "nbd_device": "/dev/nbd0", 00:16:29.428 "bdev_name": "raid5f" 00:16:29.428 } 00:16:29.428 ]' 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:29.428 { 00:16:29.428 "nbd_device": "/dev/nbd0", 00:16:29.428 "bdev_name": "raid5f" 00:16:29.428 } 00:16:29.428 ]' 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:29.428 256+0 records in 00:16:29.428 256+0 records out 00:16:29.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122237 s, 85.8 MB/s 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.428 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:29.688 256+0 records in 00:16:29.688 256+0 records out 00:16:29.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292011 s, 35.9 MB/s 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.688 18:47:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.688 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:29.948 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:30.207 malloc_lvol_verify 00:16:30.207 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:30.469 e711a73a-aed5-4eb9-afef-c04a5ab92238 00:16:30.469 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:30.728 d856ca26-7767-4700-bd9c-cdfccd7ac18f 00:16:30.728 18:47:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:30.987 /dev/nbd0 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:30.987 mke2fs 1.47.0 (5-Feb-2023) 00:16:30.987 Discarding device blocks: 0/4096 done 00:16:30.987 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:30.987 00:16:30.987 Allocating group tables: 0/1 done 00:16:30.987 Writing inode tables: 0/1 done 00:16:30.987 Creating journal (1024 blocks): done 00:16:30.987 Writing superblocks and filesystem accounting information: 0/1 done 00:16:30.987 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.987 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 102291 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 102291 ']' 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 102291 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102291 00:16:31.247 killing process with pid 102291 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102291' 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 102291 00:16:31.247 18:47:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 102291 00:16:31.507 ************************************ 00:16:31.507 END TEST bdev_nbd 00:16:31.507 ************************************ 00:16:31.507 18:47:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:31.507 00:16:31.507 real 0m4.343s 00:16:31.507 user 0m6.233s 00:16:31.507 sys 0m1.354s 00:16:31.507 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.507 18:47:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:31.507 18:47:31 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:31.507 18:47:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:16:31.507 18:47:31 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:16:31.507 18:47:31 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:31.507 18:47:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.507 18:47:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.507 18:47:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:31.507 ************************************ 00:16:31.507 START TEST bdev_fio 00:16:31.507 ************************************ 00:16:31.507 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:31.507 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.508 18:47:31 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:31.768 ************************************ 00:16:31.768 START TEST bdev_fio_rw_verify 00:16:31.768 ************************************ 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:31.768 18:47:31 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:31.768 18:47:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:31.768 18:47:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:31.768 18:47:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:16:31.768 18:47:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:31.768 18:47:32 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:31.768 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:31.768 fio-3.35 00:16:31.768 Starting 1 thread 00:16:44.005 00:16:44.005 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102477: Sun Dec 15 18:47:42 2024 00:16:44.005 read: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(471MiB/10001msec) 00:16:44.005 slat (usec): min=17, max=187, avg=20.14, stdev= 3.21 00:16:44.005 clat (usec): min=9, max=979, avg=133.81, stdev=50.16 00:16:44.005 lat (usec): min=28, max=1074, avg=153.95, stdev=51.07 00:16:44.005 clat percentiles (usec): 00:16:44.006 | 50.000th=[ 135], 99.000th=[ 227], 99.900th=[ 371], 99.990th=[ 766], 00:16:44.006 | 99.999th=[ 963] 00:16:44.006 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9869msec); 0 zone resets 00:16:44.006 slat (usec): min=7, max=264, avg=16.90, stdev= 4.45 00:16:44.006 clat (usec): min=60, max=1992, avg=301.43, stdev=46.18 00:16:44.006 lat (usec): min=76, max=2034, avg=318.33, stdev=47.47 00:16:44.006 clat percentiles (usec): 00:16:44.006 | 50.000th=[ 306], 99.000th=[ 392], 99.900th=[ 660], 99.990th=[ 1450], 00:16:44.006 | 99.999th=[ 1958] 00:16:44.006 bw ( KiB/s): min=45544, max=54056, per=98.53%, avg=49966.74, stdev=2076.63, samples=19 00:16:44.006 iops : min=11386, max=13514, avg=12491.68, stdev=519.16, samples=19 00:16:44.006 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 
100=14.76%, 250=40.79% 00:16:44.006 lat (usec) : 500=44.34%, 750=0.06%, 1000=0.02% 00:16:44.006 lat (msec) : 2=0.02% 00:16:44.006 cpu : usr=98.67%, sys=0.55%, ctx=24, majf=0, minf=13009 00:16:44.006 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.006 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.006 issued rwts: total=120696,125124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.006 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:44.006 00:16:44.006 Run status group 0 (all jobs): 00:16:44.006 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=471MiB (494MB), run=10001-10001msec 00:16:44.006 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9869-9869msec 00:16:44.006 ----------------------------------------------------- 00:16:44.006 Suppressions used: 00:16:44.006 count bytes template 00:16:44.006 1 7 /usr/src/fio/parse.c 00:16:44.006 677 64992 /usr/src/fio/iolog.c 00:16:44.006 1 8 libtcmalloc_minimal.so 00:16:44.006 1 904 libcrypto.so 00:16:44.006 ----------------------------------------------------- 00:16:44.006 00:16:44.006 00:16:44.006 real 0m11.244s 00:16:44.006 user 0m11.433s 00:16:44.006 sys 0m0.781s 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:44.006 ************************************ 00:16:44.006 END TEST bdev_fio_rw_verify 00:16:44.006 ************************************ 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "79ea60c9-a8d9-427b-8fb2-29de12edbb2c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"79ea60c9-a8d9-427b-8fb2-29de12edbb2c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "79ea60c9-a8d9-427b-8fb2-29de12edbb2c",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8caee553-dc03-4433-bd64-bfd829c11d0d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "809fe2b2-d549-420d-a5ad-4da4bbc56848",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ed15b6f2-db7c-4487-a5ae-e3399842fb4d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:44.006 /home/vagrant/spdk_repo/spdk 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:16:44.006 00:16:44.006 real 0m11.549s 00:16:44.006 user 0m11.562s 00:16:44.006 sys 0m0.920s 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.006 18:47:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:44.006 ************************************ 00:16:44.006 END TEST bdev_fio 00:16:44.006 ************************************ 00:16:44.006 18:47:43 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:44.006 18:47:43 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:44.006 18:47:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:44.006 18:47:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.006 18:47:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:44.006 ************************************ 00:16:44.006 START TEST bdev_verify 00:16:44.006 ************************************ 00:16:44.006 18:47:43 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:44.006 [2024-12-15 18:47:43.522990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:44.006 [2024-12-15 18:47:43.523142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102629 ] 00:16:44.006 [2024-12-15 18:47:43.698486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.006 [2024-12-15 18:47:43.729102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.006 [2024-12-15 18:47:43.729205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.006 Running I/O for 5 seconds... 00:16:45.510 10618.00 IOPS, 41.48 MiB/s [2024-12-15T18:47:47.346Z] 10725.00 IOPS, 41.89 MiB/s [2024-12-15T18:47:48.285Z] 10696.00 IOPS, 41.78 MiB/s [2024-12-15T18:47:49.224Z] 10732.00 IOPS, 41.92 MiB/s [2024-12-15T18:47:49.224Z] 10731.00 IOPS, 41.92 MiB/s 00:16:48.783 Latency(us) 00:16:48.783 [2024-12-15T18:47:49.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.783 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:48.783 Verification LBA range: start 0x0 length 0x2000 00:16:48.783 raid5f : 5.02 4140.39 16.17 0.00 0.00 46519.36 250.41 32052.54 00:16:48.783 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:48.783 Verification LBA range: start 0x2000 length 0x2000 00:16:48.783 raid5f : 5.02 6593.96 25.76 0.00 0.00 29203.78 137.73 22093.36 00:16:48.783 [2024-12-15T18:47:49.224Z] =================================================================================================================== 00:16:48.783 [2024-12-15T18:47:49.224Z] Total : 10734.35 41.93 0.00 0.00 35885.11 137.73 32052.54 00:16:48.783 ************************************ 00:16:48.783 END TEST bdev_verify 00:16:48.783 ************************************ 00:16:48.783 00:16:48.783 real 0m5.745s 00:16:48.783 user 0m10.662s 00:16:48.783 sys 0m0.251s 
00:16:48.783 18:47:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.783 18:47:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:49.043 18:47:49 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:49.043 18:47:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:49.043 18:47:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.043 18:47:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.043 ************************************ 00:16:49.043 START TEST bdev_verify_big_io 00:16:49.043 ************************************ 00:16:49.043 18:47:49 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:49.043 [2024-12-15 18:47:49.340119] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:49.043 [2024-12-15 18:47:49.340363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102711 ] 00:16:49.303 [2024-12-15 18:47:49.517626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.303 [2024-12-15 18:47:49.548286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.303 [2024-12-15 18:47:49.548408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.562 Running I/O for 5 seconds... 
00:16:51.439 633.00 IOPS, 39.56 MiB/s [2024-12-15T18:47:53.259Z] 761.00 IOPS, 47.56 MiB/s [2024-12-15T18:47:54.197Z] 802.33 IOPS, 50.15 MiB/s [2024-12-15T18:47:55.138Z] 777.00 IOPS, 48.56 MiB/s [2024-12-15T18:47:55.138Z] 787.00 IOPS, 49.19 MiB/s 00:16:54.697 Latency(us) 00:16:54.697 [2024-12-15T18:47:55.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.697 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:54.697 Verification LBA range: start 0x0 length 0x200 00:16:54.697 raid5f : 5.22 340.53 21.28 0.00 0.00 9293696.22 224.48 386462.07 00:16:54.697 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:54.697 Verification LBA range: start 0x200 length 0x200 00:16:54.697 raid5f : 5.27 457.56 28.60 0.00 0.00 7020352.33 157.40 304041.25 00:16:54.697 [2024-12-15T18:47:55.138Z] =================================================================================================================== 00:16:54.697 [2024-12-15T18:47:55.138Z] Total : 798.09 49.88 0.00 0.00 7985261.72 157.40 386462.07 00:16:54.956 00:16:54.956 real 0m6.004s 00:16:54.956 user 0m11.178s 00:16:54.956 sys 0m0.246s 00:16:54.956 ************************************ 00:16:54.956 END TEST bdev_verify_big_io 00:16:54.956 ************************************ 00:16:54.956 18:47:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.956 18:47:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.956 18:47:55 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:54.956 18:47:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:54.956 18:47:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.956 18:47:55 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:54.956 ************************************ 00:16:54.956 START TEST bdev_write_zeroes 00:16:54.956 ************************************ 00:16:54.956 18:47:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.216 [2024-12-15 18:47:55.433013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:55.216 [2024-12-15 18:47:55.433169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102793 ] 00:16:55.216 [2024-12-15 18:47:55.610871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.216 [2024-12-15 18:47:55.640576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.476 Running I/O for 1 seconds... 
00:16:56.415 29823.00 IOPS, 116.50 MiB/s 00:16:56.415 Latency(us) 00:16:56.415 [2024-12-15T18:47:56.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.415 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:56.415 raid5f : 1.01 29785.72 116.35 0.00 0.00 4285.94 1302.13 5838.14 00:16:56.415 [2024-12-15T18:47:56.856Z] =================================================================================================================== 00:16:56.415 [2024-12-15T18:47:56.856Z] Total : 29785.72 116.35 0.00 0.00 4285.94 1302.13 5838.14 00:16:56.676 00:16:56.676 real 0m1.721s 00:16:56.676 user 0m1.378s 00:16:56.676 sys 0m0.230s 00:16:56.676 18:47:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.676 18:47:57 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:56.676 ************************************ 00:16:56.676 END TEST bdev_write_zeroes 00:16:56.676 ************************************ 00:16:56.935 18:47:57 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.935 18:47:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:56.936 18:47:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.936 18:47:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:56.936 ************************************ 00:16:56.936 START TEST bdev_json_nonenclosed 00:16:56.936 ************************************ 00:16:56.936 18:47:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.936 [2024-12-15 
18:47:57.229555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:56.936 [2024-12-15 18:47:57.229731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102829 ] 00:16:57.196 [2024-12-15 18:47:57.409562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.196 [2024-12-15 18:47:57.442607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.196 [2024-12-15 18:47:57.442715] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:57.196 [2024-12-15 18:47:57.442734] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:57.196 [2024-12-15 18:47:57.442755] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.196 00:16:57.196 real 0m0.403s 00:16:57.196 user 0m0.165s 00:16:57.196 sys 0m0.134s 00:16:57.196 18:47:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.196 18:47:57 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:57.196 ************************************ 00:16:57.196 END TEST bdev_json_nonenclosed 00:16:57.196 ************************************ 00:16:57.196 18:47:57 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.196 18:47:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:57.196 18:47:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.196 18:47:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.196 
************************************ 00:16:57.196 START TEST bdev_json_nonarray 00:16:57.196 ************************************ 00:16:57.196 18:47:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:57.456 [2024-12-15 18:47:57.699327] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:57.456 [2024-12-15 18:47:57.699530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102855 ] 00:16:57.456 [2024-12-15 18:47:57.871618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.716 [2024-12-15 18:47:57.903464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.716 [2024-12-15 18:47:57.903688] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:57.716 [2024-12-15 18:47:57.903715] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:57.716 [2024-12-15 18:47:57.903728] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.716 00:16:57.716 real 0m0.382s 00:16:57.716 user 0m0.155s 00:16:57.716 sys 0m0.121s 00:16:57.716 18:47:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.716 18:47:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:57.716 ************************************ 00:16:57.716 END TEST bdev_json_nonarray 00:16:57.716 ************************************ 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:57.716 18:47:58 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:57.716 00:16:57.716 real 0m34.840s 00:16:57.716 user 0m47.180s 00:16:57.716 sys 0m4.920s 00:16:57.716 ************************************ 00:16:57.716 END TEST blockdev_raid5f 00:16:57.717 ************************************ 00:16:57.717 18:47:58 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.717 18:47:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:57.717 18:47:58 -- spdk/autotest.sh@194 -- # uname -s 00:16:57.717 18:47:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:57.717 18:47:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:57.717 18:47:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:57.717 18:47:58 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:57.717 18:47:58 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:57.717 18:47:58 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:57.717 18:47:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.717 18:47:58 -- common/autotest_common.sh@10 -- # set +x 00:16:57.977 18:47:58 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:16:57.977 18:47:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:57.977 18:47:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:57.977 18:47:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:16:57.977 18:47:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:16:57.977 18:47:58 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:16:57.977 18:47:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:16:57.977 18:47:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.977 18:47:58 -- common/autotest_common.sh@10 -- # set +x 00:16:57.977 18:47:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:16:57.977 18:47:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:16:57.977 18:47:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:16:57.977 18:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:00.515 INFO: APP EXITING 00:17:00.515 INFO: killing all VMs 00:17:00.515 INFO: killing vhost app 00:17:00.515 INFO: EXIT DONE 00:17:00.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:00.775 Waiting for block devices as requested 00:17:00.775 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:01.976 Cleaning 00:17:01.976 Removing: /var/run/dpdk/spdk0/config 00:17:01.976 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:01.976 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:01.976 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:01.976 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:01.976 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:01.976 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:01.976 Removing: /dev/shm/spdk_tgt_trace.pid70973 00:17:01.976 Removing: /var/run/dpdk/spdk0 00:17:01.976 Removing: /var/run/dpdk/spdk_pid100015 00:17:01.976 Removing: /var/run/dpdk/spdk_pid100940 00:17:01.976 Removing: /var/run/dpdk/spdk_pid101257 00:17:01.976 Removing: /var/run/dpdk/spdk_pid101920 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102178 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102217 00:17:01.976 Removing: 
/var/run/dpdk/spdk_pid102243 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102467 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102629 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102711 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102793 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102829 00:17:01.976 Removing: /var/run/dpdk/spdk_pid102855 00:17:01.976 Removing: /var/run/dpdk/spdk_pid70804 00:17:01.976 Removing: /var/run/dpdk/spdk_pid70973 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71175 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71262 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71291 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71398 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71415 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71603 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71682 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71767 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71856 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71942 00:17:01.976 Removing: /var/run/dpdk/spdk_pid71976 00:17:01.976 Removing: /var/run/dpdk/spdk_pid72018 00:17:01.976 Removing: /var/run/dpdk/spdk_pid72083 00:17:01.976 Removing: /var/run/dpdk/spdk_pid72200 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72625 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72675 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72728 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72744 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72807 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72818 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72898 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72914 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72956 00:17:02.237 Removing: /var/run/dpdk/spdk_pid72974 00:17:02.237 Removing: /var/run/dpdk/spdk_pid73022 00:17:02.237 Removing: /var/run/dpdk/spdk_pid73034 00:17:02.237 Removing: /var/run/dpdk/spdk_pid73174 00:17:02.237 Removing: /var/run/dpdk/spdk_pid73211 00:17:02.237 Removing: /var/run/dpdk/spdk_pid73294 00:17:02.237 Removing: /var/run/dpdk/spdk_pid74485 00:17:02.237 Removing: 
/var/run/dpdk/spdk_pid74691 00:17:02.237 Removing: /var/run/dpdk/spdk_pid74820 00:17:02.237 Removing: /var/run/dpdk/spdk_pid75430 00:17:02.237 Removing: /var/run/dpdk/spdk_pid75631 00:17:02.237 Removing: /var/run/dpdk/spdk_pid75760 00:17:02.237 Removing: /var/run/dpdk/spdk_pid76370 00:17:02.237 Removing: /var/run/dpdk/spdk_pid76689 00:17:02.237 Removing: /var/run/dpdk/spdk_pid76824 00:17:02.237 Removing: /var/run/dpdk/spdk_pid78170 00:17:02.237 Removing: /var/run/dpdk/spdk_pid78413 00:17:02.237 Removing: /var/run/dpdk/spdk_pid78548 00:17:02.237 Removing: /var/run/dpdk/spdk_pid79889 00:17:02.237 Removing: /var/run/dpdk/spdk_pid80132 00:17:02.237 Removing: /var/run/dpdk/spdk_pid80265 00:17:02.237 Removing: /var/run/dpdk/spdk_pid81591 00:17:02.237 Removing: /var/run/dpdk/spdk_pid82014 00:17:02.237 Removing: /var/run/dpdk/spdk_pid82149 00:17:02.237 Removing: /var/run/dpdk/spdk_pid83579 00:17:02.237 Removing: /var/run/dpdk/spdk_pid83827 00:17:02.237 Removing: /var/run/dpdk/spdk_pid83956 00:17:02.237 Removing: /var/run/dpdk/spdk_pid85391 00:17:02.237 Removing: /var/run/dpdk/spdk_pid85640 00:17:02.237 Removing: /var/run/dpdk/spdk_pid85769 00:17:02.237 Removing: /var/run/dpdk/spdk_pid87204 00:17:02.237 Removing: /var/run/dpdk/spdk_pid87676 00:17:02.237 Removing: /var/run/dpdk/spdk_pid87811 00:17:02.237 Removing: /var/run/dpdk/spdk_pid87938 00:17:02.237 Removing: /var/run/dpdk/spdk_pid88339 00:17:02.237 Removing: /var/run/dpdk/spdk_pid89054 00:17:02.237 Removing: /var/run/dpdk/spdk_pid89423 00:17:02.237 Removing: /var/run/dpdk/spdk_pid90123 00:17:02.237 Removing: /var/run/dpdk/spdk_pid90548 00:17:02.237 Removing: /var/run/dpdk/spdk_pid91285 00:17:02.237 Removing: /var/run/dpdk/spdk_pid91683 00:17:02.237 Removing: /var/run/dpdk/spdk_pid93591 00:17:02.237 Removing: /var/run/dpdk/spdk_pid94018 00:17:02.237 Removing: /var/run/dpdk/spdk_pid94438 00:17:02.237 Removing: /var/run/dpdk/spdk_pid96467 00:17:02.237 Removing: /var/run/dpdk/spdk_pid96930 00:17:02.498 Removing: 
/var/run/dpdk/spdk_pid97436 00:17:02.498 Removing: /var/run/dpdk/spdk_pid98472 00:17:02.498 Removing: /var/run/dpdk/spdk_pid98789 00:17:02.498 Removing: /var/run/dpdk/spdk_pid99702 00:17:02.498 Clean 00:17:02.498 18:48:02 -- common/autotest_common.sh@1453 -- # return 0 00:17:02.498 18:48:02 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:17:02.498 18:48:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.498 18:48:02 -- common/autotest_common.sh@10 -- # set +x 00:17:02.498 18:48:02 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:17:02.498 18:48:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.498 18:48:02 -- common/autotest_common.sh@10 -- # set +x 00:17:02.498 18:48:02 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:02.498 18:48:02 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:02.498 18:48:02 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:02.498 18:48:02 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:17:02.498 18:48:02 -- spdk/autotest.sh@398 -- # hostname 00:17:02.498 18:48:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:02.758 geninfo: WARNING: invalid characters removed from testname! 
00:17:24.817 18:48:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:28.109 18:48:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:30.014 18:48:29 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:31.922 18:48:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:33.829 18:48:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:35.735 18:48:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:37.643 18:48:37 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:37.643 18:48:37 -- spdk/autorun.sh@1 -- $ timing_finish 00:17:37.643 18:48:37 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:17:37.643 18:48:37 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:37.643 18:48:37 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:37.643 18:48:37 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:37.643 + [[ -n 6154 ]] 00:17:37.643 + sudo kill 6154 00:17:37.653 [Pipeline] } 00:17:37.669 [Pipeline] // timeout 00:17:37.675 [Pipeline] } 00:17:37.689 [Pipeline] // stage 00:17:37.695 [Pipeline] } 00:17:37.710 [Pipeline] // catchError 00:17:37.720 [Pipeline] stage 00:17:37.722 [Pipeline] { (Stop VM) 00:17:37.735 [Pipeline] sh 00:17:38.017 + vagrant halt 00:17:40.556 ==> default: Halting domain... 00:17:48.697 [Pipeline] sh 00:17:48.982 + vagrant destroy -f 00:17:51.529 ==> default: Removing domain... 
00:17:51.542 [Pipeline] sh 00:17:51.825 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:17:51.835 [Pipeline] } 00:17:51.849 [Pipeline] // stage 00:17:51.854 [Pipeline] } 00:17:51.867 [Pipeline] // dir 00:17:51.872 [Pipeline] } 00:17:51.886 [Pipeline] // wrap 00:17:51.891 [Pipeline] } 00:17:51.904 [Pipeline] // catchError 00:17:51.913 [Pipeline] stage 00:17:51.915 [Pipeline] { (Epilogue) 00:17:51.927 [Pipeline] sh 00:17:52.215 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:56.429 [Pipeline] catchError 00:17:56.431 [Pipeline] { 00:17:56.444 [Pipeline] sh 00:17:56.730 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:56.730 Artifacts sizes are good 00:17:56.740 [Pipeline] } 00:17:56.755 [Pipeline] // catchError 00:17:56.767 [Pipeline] archiveArtifacts 00:17:56.774 Archiving artifacts 00:17:56.886 [Pipeline] cleanWs 00:17:56.898 [WS-CLEANUP] Deleting project workspace... 00:17:56.898 [WS-CLEANUP] Deferred wipeout is used... 00:17:56.905 [WS-CLEANUP] done 00:17:56.907 [Pipeline] } 00:17:56.923 [Pipeline] // stage 00:17:56.929 [Pipeline] } 00:17:56.943 [Pipeline] // node 00:17:56.949 [Pipeline] End of Pipeline 00:17:57.000 Finished: SUCCESS